executor.invokeAll() doesn't finish execution - java

I'm trying to implement a kind of fork operation for a "toy language"; essentially I'm writing my own compiler/interpreter. When the user forks, my program is supposed to start/simulate some threading.
In my code I prepare the callables and then start executing them, but when I use invokeAll my program never terminates; it just doesn't do anything.
I ran the code under a debugger and it simply stops when I reach invokeAll(): it neither terminates nor throws an error (the invoke is inside a try-catch).
Some code:
// preparing the callables
java.util.List<Callable<MyClass>> callList = prgll.stream()
        .map(p -> (Callable<MyClass>) () -> {
            return p.oneStep(); // a method from my class
        })
        .collect(Collectors.toList());
// start the execution of the callables
// it should return the list of newly created threads
ExecutorService executor = Executors.newSingleThreadExecutor();
java.util.List<MyClass> fff;
try {
    // here my program gets blocked, but not all the time, only when I use my fork class
    fff = executor.invokeAll(callList).stream()
            .map(future -> {
                try {
                    return future.get();
                } catch (Exception e) {
                    e.printStackTrace();
                    throw new CustomException("Error in onestepforall" + e.getMessage());
                }
            })
            .filter(p -> p != null)
            .collect(Collectors.toList());
} catch (InterruptedException e) {
    e.printStackTrace();
    throw new CustomException("Error while trying executor" + e.getMessage());
}
Can I debug deeper into my own code to see exactly why invokeAll stays on standby?
I also tried changing from a single-thread executor to a fixed thread pool, but it still didn't do anything.

invokeAll() is a blocking method, i.e. it waits until all futures have completed.
If you want asynchronous task submission, .map the list of callables through the executor's submit() instead (see the sketch below).
If your question is why the tasks don't complete, then you don't want to debug the thread that submits to the executor but the spawned thread(s), since the tasks are executed on separate threads. Instead of using a breakpoint you can simply suspend the whole JVM with the attached debugger and then look at the individual threads.
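A minimal sketch of that submit()-based variant, reusing executor, prgll and MyClass from the question (and assuming the usual java.util / java.util.concurrent imports):
// Submit each callable individually; submission returns immediately,
// and each Future completes on its own.
List<Future<MyClass>> futures = prgll.stream()
        .map(p -> executor.submit((Callable<MyClass>) p::oneStep))
        .collect(Collectors.toList());

// Collect the results later, when you actually need them.
List<MyClass> results = new ArrayList<>();
for (Future<MyClass> f : futures) {
    try {
        results.add(f.get()); // blocks only here, one task at a time
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}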

I found the problem. p.oneStep() was never completing for fork because there was an infinite loop inside the method. I got rid of it and everything else works fine.

ThreadPoolTaskExecutor blocks feeder thread forever

I'm trying to have a single thread loading records (say from a database). This thread feeds records into a thread pool that processes these individual tasks.
I was expecting this code to work, but it prints numbers up to 60 and then stops.
ThreadPoolTaskExecutor accountLoaderTaskExecutor = new ThreadPoolTaskExecutor();
accountLoaderTaskExecutor.setCorePoolSize(1);
accountLoaderTaskExecutor.setMaxPoolSize(1);
accountLoaderTaskExecutor.initialize();

ThreadPoolTaskExecutor accountDeletionTaskExecutor = new ThreadPoolTaskExecutor();
accountDeletionTaskExecutor.setCorePoolSize(10);
accountDeletionTaskExecutor.setMaxPoolSize(10);
accountDeletionTaskExecutor.setQueueCapacity(50);
accountDeletionTaskExecutor.initialize();

accountLoaderTaskExecutor.submit(() -> {
    List<Integer> customerAccountIds = getCustomerAccountIds(); // return 1000s integers
    customerAccountIds.forEach(id -> {
        accountDeletionTaskExecutor.submit(() -> {
            try {
                System.out.println(id);
                Thread.sleep(500);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        });
    });
});

Thread.currentThread().join();
I was expecting the accountLoaderTaskExecutor thread to block on accountDeletionTaskExecutor.submit but then continue as records are being processed until it exhausts all customerAccountIds.
If that comment "return 1000s integers" means that you have thousands of ids, then the code you have written will queue thousands of tasks to the thread pool and then proceed to Thread.currentThread().join(), which does not help here: joining the current thread to itself is meaningless, since you can only usefully join a different thread.
Then, I presume, you exit the application, and the default behavior is probably to terminate all thread pools on application exit. (I am not sure about that; I am speculating.)
The ~60 tasks that you observe getting started probably manage to run while the remaining thousands of tasks are being queued.
To verify that this is what is happening, try replacing the call to join() with Thread.sleep(1000) and see whether you observe more tasks being started.
If that is the case, then one approach to solving your problem might be to add a proper graceful thread-pool shutdown, of the kind that first waits for all queued tasks to complete; a sketch of that idea follows.
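A minimal sketch of such a graceful shutdown, assuming Spring's ThreadPoolTaskExecutor from the question (setWaitForTasksToCompleteOnShutdown and setAwaitTerminationSeconds are standard configuration hooks on it; deleteAccount is a made-up placeholder for the per-id work):
// Make shutdown() wait for queued deletions instead of discarding them.
accountDeletionTaskExecutor.setWaitForTasksToCompleteOnShutdown(true);
accountDeletionTaskExecutor.setAwaitTerminationSeconds(600);

// Keep the feeder's Future so we know when all ids have been submitted.
Future<?> feeder = accountLoaderTaskExecutor.submit(() -> {
    getCustomerAccountIds().forEach(id ->
            accountDeletionTaskExecutor.submit(() -> deleteAccount(id)));
});

feeder.get();                           // wait for the feeder (inside a method that throws InterruptedException, ExecutionException)
accountLoaderTaskExecutor.shutdown();   // no more feeder tasks
accountDeletionTaskExecutor.shutdown(); // waits (per the settings above) for queued deletions to finish
Note that this sketch does not address the queue capacity of 50 from the question; with thousands of ids you would also need a larger queue or some form of back-pressure on the feeder.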

How do I kill a Java Future?

The service I'm working on uses a Future to run multiple tasks in parallel; each task can take up to a minute to complete. However, it seems the external lib is buggy, since on some occasions (2% of the time) it doesn't return. In those cases I would like to give it a 2-minute grace period, and if it hasn't returned by then, I would like to kill the Future and re-schedule it for later (it will succeed eventually).
How do I kill the Future?
private void run() throws InterruptedException {
    ExecutorService queue = Executors.newFixedThreadPool(1);
    Future<Integer> f = queue.submit(new MyTask());
    Thread.sleep(500);
    try {
        Integer r = f.get(120, TimeUnit.SECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        e.printStackTrace();
        f.cancel(true);
    }
    // Bad future still running here and I need it dead.
}

private class MyTask implements Callable<Integer> {
    private ExternalLibrary extlib = new ExternalLibrary();

    @Override
    public Integer call() throws Exception {
        // step 1 - do a few things
        // step 2 - process data
        Integer val = this.extlib.doSomething(); // here's the problem!
        // step 3 - do other things
        return val;
    }
}
I can see the external lib running and consuming CPU (for 24 hours)... doing nothing. It's a simple task that should never take more than 60 seconds to complete its work.
So far, I'm killing the whole JVM once a day to get rid of this issue, but I'm sure there must be a better way. I wonder how app servers (Tomcat, JBoss, Weblogic, etc.) do it with rogue processes.
Even if you could kill the Future hanging in the buggy library, this likely does not solve your problem. The library might still have acquired resources that will not be properly cleaned up: memory allocations, open file handles, or even held monitors leaving some internal data structures in an inconsistent state. Eventually you will likely be back at the point where you have to restart your JVM.
There are basically two options: fix it or isolate it.
Fix: try to get the library fixed. If this is not possible,
isolate: isolate the library into an external service your application depends on. E.g. implement a REST API for calling the library and wrap everything up into a Docker image. Automate restarting of the Docker container as needed.
As others have mentioned, stopping a Future is cooperative, meaning the thread running the async task must respond to cancellation from the waiting thread. If the async task isn't cooperative, simply invoking shutdown or shutdownNow won't be enough, as the underlying thread pool will just interrupt the threads.
If you have no control over extlib, and extlib is not cooperative, I see two options:
You can stop the thread that is currently running. This can cause issues if the thread being stopped is holding a lock or some other resource; it can lead to interesting bugs that are hard to dissect.
This could take some more work, but you could run the async task as a separate process entirely. The thread pool can still manage the process and, on interruption, can destroy it. This obviously has its own issues, like how to feed the process its required input.
If I understand your requirement correctly (i.e. a single thread), you can look at shutting down the ExecutorService in two phases; the code is available in the Javadoc of ExecutorService:
try {
    Integer r = f.get(120, TimeUnit.SECONDS);
} catch (InterruptedException | ExecutionException | TimeoutException e) {
    e.printStackTrace();
    //f.cancel(true); you can omit this call if you wish.
    shutdownAndAwaitTermination(queue);
}
... // remaining method code

void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown(); // Disable new tasks from being submitted
    try {
        // Wait a while for existing tasks to terminate
        if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
            pool.shutdownNow(); // Cancel currently executing tasks
            // Wait a while for tasks to respond to being cancelled
            if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                System.err.println("Pool did not terminate");
        }
    } catch (InterruptedException ie) {
        // (Re-)Cancel if current thread also interrupted
        pool.shutdownNow();
        // Preserve interrupt status
        Thread.currentThread().interrupt();
    }
}
Please read the documentation for shutdown() and shutdownNow() to see how they behave, because it clearly states there is no 100% guarantee that running tasks / the ExecutorService will actually be stopped.
Unfortunately, if the external library does not cooperate with thread interrupts, there is nothing you can do to kill the thread running the task managed by the ExecutorService.
An alternative that I can think of is to run the offending code as a separate process. Using ProcessBuilder and Process, your task can effectively control (or even kill) the offending process after a timeout; see https://docs.oracle.com/javase/9/docs/api/java/lang/Process.html#destroyForcibly-- (a sketch follows).
Also see https://docs.oracle.com/javase/9/docs/api/java/lang/ProcessBuilder.html
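A minimal sketch of that idea, assuming the external library can be driven from a small command-line wrapper (the "extlib-runner.jar" name and its arguments are made up for illustration):
// Start the wrapper as a child process instead of calling the library in-process.
// (inside a method that throws IOException and InterruptedException)
Process proc = new ProcessBuilder("java", "-jar", "extlib-runner.jar", "input.dat")
        .redirectErrorStream(true)
        .start();

// Give it two minutes; if it is still alive afterwards, kill it forcibly.
if (!proc.waitFor(120, TimeUnit.SECONDS)) {
    proc.destroyForcibly(); // no cooperation from the child needed
    // re-schedule the work for later here
}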
@joe That is correct. Unless you have control over the thread, and over the code running inside it, you can't kill it.
this.extlib.doSomething();
If this line starts a thread, then we need to get hold of that thread to kill it, since we don't have a reference to stop it.
In your code, the call:
this.extlib.doSomething()
must be synchronous, because if it were not, the code would make no sense. With that assumption, you can try:
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Integer> future = executor.submit(new MyTask());
try {
    future.get(120, TimeUnit.SECONDS);
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
} catch (TimeoutException e) {
    future.cancel(true);
} finally {
    executor.shutdownNow();
}
If this doesn't stop the doSomething() work, it is because doSomething() is spawning other threads to do the work. In that case, you can check which threads are running with:
Thread.getAllStackTraces()
and try to kill the right one...
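For example, a rough sketch of dumping the live threads so you can spot the one stuck in the library (the "ExternalLibrary" filter string is an assumption; match on whatever actually appears in your stack traces):
// Dump all live threads and flag the ones with library frames on their stacks.
for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
    Thread t = entry.getKey();
    for (StackTraceElement frame : entry.getValue()) {
        if (frame.getClassName().contains("ExternalLibrary")) {
            System.err.println("Suspect thread: " + t.getName() + " at " + frame);
            t.interrupt(); // last resort; Thread.stop() is deprecated and unsafe
        }
    }
}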

How to be sure that a @Scheduled task terminates?

Inside a Spring web application I have a scheduled task that is called every five minutes.
@Scheduled(fixedDelay = 300000)
public void importDataTask()
{
    importData(); // db calls, file manipulations, etc..
}
Usually the task runs smoothly for days, but sometimes it happens that the method importData() will not terminate, so importDataTask() will not be called again and everything stays blocked until I restart the application.
The question is: is there a feasible method to be sure that a method will not be indefinitely blocked (maybe waiting for a resource, or something else)?
The question is: is there a feasible method to be sure that a method will not be indefinitely blocked (maybe waiting for a resource, or something else)?
If the task cannot be scheduled at a precise, regular interval, you should maybe not rely on the fixed delay alone but combine two conditions: the delay, plus the last execution having finished.
You could schedule a task that checks whether both conditions are met and, if so, runs the actual import processing. Otherwise, it waits for the next schedule.
This way, you should not be blocked. You might wait some extra time when the task exceeds the fixed delay; if that becomes a problem because the fixed delay is often exceeded, you should probably not use a fixed delay at all, or you should increase it noticeably so that this happens less often.
Here is an example (written without an editor; sorry for any mistakes):
private boolean isLastImportDataTaskFinished;

@Scheduled(fixedDelay = 300000)
public void importDataTaskManager() {
    if (isLastImportDataTaskFinished()) {
        new Thread(new ImportantDataProcessing()).start();
    } else {
        // log the problem if you want
    }
}

private boolean isLastImportDataTaskFinished() {
    // to retrieve this information, you can do as you want: use a variable
    // in this class or data in a database, file...
    // here a simple implementation
    return isLastImportDataTaskFinished;
}
Runnable class :
public class ImportantDataProcessing implements Runnable {
    public void run() {
        importData(); // db calls, file manipulations, etc..
    }
}
Comment:
But if I run it as a thread, how can I kill it if I find it's exceeding the time limit, since I don't have any reference to it (in the idea of using a second task to detect the stuck state)?
You can use an ExecutorService (you have a question about it here : How to timeout a thread).
Here is a very simple example:
ExecutorService executor = Executors.newSingleThreadExecutor();
Future future = executor.submit(new ImportantDataProcessing());
try {
    future.get(100, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    e.printStackTrace();
} catch (ExecutionException e) {
    e.printStackTrace();
} catch (TimeoutException e) {
    // the timeout to handle (but the other exceptions should be handled too :))
    e.printStackTrace();
}
executor.shutdown();
If ImportantDataProcessing may return interesting information, you can submit a Callable task instead of a Runnable instance, so that the Future is typed (see the sketch below).
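For instance, a minimal sketch using a Callable (the ImportResult type is made up; use whatever your import actually produces):
// A Callable variant of the processing, so the Future carries a result.
Callable<ImportResult> importJob = () -> importData(); // assumes importData() returns an ImportResult

Future<ImportResult> typedFuture = executor.submit(importJob);
try {
    ImportResult result = typedFuture.get(100, TimeUnit.SECONDS);
    // use result here
} catch (TimeoutException e) {
    typedFuture.cancel(true); // give up on this run and let the next schedule retry
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}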
Firstly, sure: there are many feasible ways to be told that the process is blocked, such as logs/messages/emails embedded in your code.
Secondly, it depends on whether you want the call to block or not. If blocking is not your intention, a new thread or a timeout may be your choice.

Best practice for interrupting threads that take longer than a threshold

I am using the Java ExecutorService framework to submit callable tasks for execution.
These tasks communicate with a web service and a web service timeout of 5 mins is applied.
However I've seen that in some cases the timeout is being ignored and the thread 'hangs' on an API call; hence, I want to cancel all the tasks that take longer than, say, 5 minutes.
Currently, I have a list of futures; I iterate through them and call future.get() until all tasks are complete. Now, I've seen that the overloaded future.get() method takes a timeout and throws a TimeoutException when the task doesn't complete in that window. So I thought of an approach where I do a future.get() with a timeout and, in case of a TimeoutException, do a future.cancel(true) to make sure that the task is interrupted.
My main questions:
1. Is get with a timeout the best way to solve this issue?
2. Is there the possibility that I'm waiting with the get call on a task that hasn't yet been placed on the thread pool (isn't an active worker)? In that case I may be terminating a task that, when it starts, might actually complete within the required time limit.
Any suggestions would be deeply appreciated.
Is get with a timeout the best way to solve this issue?
This will not suffice. For instance, if your task is not designed to respond to interruption, it will keep on running or stay blocked.
Is there the possibility that I'm waiting with the get call on a task that hasn't yet been placed on the thread pool (isn't an active worker)? In that case I may be terminating a task that, when it starts, might actually complete within the required time limit.
Yes, you might end up cancelling a task that is never scheduled to run if your thread pool is not configured properly.
The following code snippet is one way to make your task responsive to interruption when it contains non-interruptible blocking. It also avoids cancelling tasks that have not been scheduled to run yet. The idea here is to override the interrupt method and release the running task's resources, e.g. by closing sockets, database connections, etc. This code is not perfect; you need to adapt it to your requirements, handle exceptions, and so on.
class LongRunningTask extends Thread {
    private Socket socket;
    private volatile AtomicBoolean atomicBoolean;

    public LongRunningTask() {
        atomicBoolean = new AtomicBoolean(false);
    }

    @Override
    public void interrupt() {
        try {
            // clean up any resources, close connections etc.
            socket.close();
        } catch (Throwable e) {
        } finally {
            atomicBoolean.compareAndSet(true, false);
            // set the interrupt status of the executing thread
            super.interrupt();
        }
    }

    public boolean isRunning() {
        return atomicBoolean.get();
    }

    @Override
    public void run() {
        atomicBoolean.compareAndSet(false, true);
        // any long running task that might hang, for instance:
        try {
            socket = new Socket("0.0.0.0", 5000);
            socket.getInputStream().read();
        } catch (UnknownHostException e) {
        } catch (IOException e) {
        } finally {
        }
    }
}
// your task caller thread
// map of futures and tasks
ExecutorService execService = Executors.newFixedThreadPool(3); // pool size chosen for illustration
Map<Future, LongRunningTask> map = new HashMap<Future, LongRunningTask>();
ArrayList<Future> list = new ArrayList<Future>();
int noOfSubmittedTasks = 0;

for (int i = 0; i < 6; i++) {
    LongRunningTask task = new LongRunningTask();
    Future f = execService.submit(task);
    map.put(f, task);
    list.add(f);
    noOfSubmittedTasks++;
}

while (noOfSubmittedTasks > 0) {
    for (int i = 0; i < list.size(); i++) {
        Future f = list.get(i);
        LongRunningTask task = map.get(f);
        if (task.isRunning()) {
            /*
             * This ensures that you process only those tasks which have actually started running.
             */
            try {
                f.get(5, TimeUnit.MINUTES);
                noOfSubmittedTasks--;
            } catch (InterruptedException e) {
            } catch (ExecutionException e) {
            } catch (TimeoutException e) {
                // this will call the overridden interrupt method
                f.cancel(true);
                noOfSubmittedTasks--;
            }
        }
    }
}
execService.shutdown();
Is get with a timeout the best way to solve this issue?
Yes, it is perfectly fine to call get(timeout) on a Future object; if the task the Future points to has already executed, it will return immediately. If the task is yet to be executed, or is being executed, it will wait up to the timeout; this is good practice.
Is there the possibility that I'm waiting with the get call on a task that hasn't yet been placed on the thread pool (isn't an active worker)?
You only get a Future object when you place a task on the thread pool, so it is not possible to call get() on a task without placing it on the thread pool. Yes, there is a possibility that the task has not yet been picked up by a free worker.
The approach you describe is OK. But most importantly, before settling on a timeout threshold, you need to know the right thread pool size and timeout for your environment. Do a stress test: it will reveal whether the number of worker threads you configured for the thread pool is adequate, and it may even let you reduce the timeout value. This test is the most important step, I feel.
A timeout on get is perfectly fine, but you should also cancel the task if get throws a TimeoutException. If you do the above test properly and set your thread pool size and timeout to ideal values, you may not even need to cancel tasks externally (though you can keep that as a backup). And yes, sometimes when cancelling a task you may end up cancelling one that has not yet been picked up by the executor.
You can of course cancel a task by using
task.cancel(true)
It is perfectly legal. But this will only interrupt the thread if it is RUNNING.
If the thread is waiting to acquire an intrinsic lock, then the interruption request has no effect other than setting the thread's interrupted status. In this case you cannot do anything to stop it. For the interruption to take effect, the thread has to come out of the blocked state by acquiring the lock it was waiting for (which may take more than 5 minutes). This is a limitation of intrinsic locking.
However, you can use explicit lock classes to solve this problem. You can use the lockInterruptibly method of the Lock interface to achieve this: lockInterruptibly allows the thread to try to acquire a lock while remaining responsive to interruption. Here is a small example:
public void workWithExplicitLock() throws InterruptedException {
    Lock lock = new ReentrantLock();
    lock.lockInterruptibly();
    try {
        // work with shared object state
    } finally {
        lock.unlock();
    }
}

How to stop immediately a task which is started using an ExecutorService?

I have tried many different ways to immediately stop a task which is started using an ExecutorService, with no luck.
Future<Void> future = executorService.submit(new Callable<Void>() {
    public Void call() {
        // ... do many other things here ...
        if (Thread.currentThread().isInterrupted()) {
            return null;
        }
        // ... do many other things here ...
        if (Thread.currentThread().isInterrupted()) {
            return null;
        }
        return null;
    }
});

if (flag) { // may be true and directly cancel the task
    future.cancel(true);
}
Sometimes I need to cancel the task immediately after it is started. You may be curious why I want to do this; well, imagine a scenario where a user accidentally hits the "Download" button to start a download task and immediately wants to cancel the action because it was just an accidental click.
The problem is that after calling future.cancel(true), the task is not stopped and Thread.currentThread().isInterrupted() still returns false, so I have no way to know from inside the call() method that the task was cancelled.
I am thinking of setting a flag like cancelled=true after calling future.cancel(true) and checking that flag constantly in the call() method, but I think this is a hack and the code could get very ugly because the user can start many tasks at the same moment.
Is there a more elegant way of achieving what I want?
EDIT:
This really drives me mad. I have spent almost a day on this problem now. I will try to explain the problem I'm facing in a bit more detail.
I do the following to start 5 tasks, each of which starts 5 threads to download a file, and then I stop all 5 tasks immediately. For all of the method calls below, I start a thread (ExecutorService.submit(task)) to make the call asynchronous, as you can tell from the suffixes of the method names.
int t1 = startTaskAsync(task1);
int t2 = startTaskAsync(task2);
int t3 = startTaskAsync(task3);
int t4 = startTaskAsync(task4);
int t5 = startTaskAsync(task5);

stopTaskAsync(t1);
stopTaskAsync(t2);
stopTaskAsync(t3);
stopTaskAsync(t4);
stopTaskAsync(t5);
In startTaskAsync(), I simply open a socket connection to the remote server to get the size of the file (and this certainly takes some time); after successfully getting the file size, I start 5 threads to download different parts of the file, like the following (the code is simplified to make it easier to follow):
public int startTaskAsync(DownloadTask task) {
    Future<Void> future = executorService.submit(new Callable<Void>() {
        public Void call() throws Exception {
            // this is a synchronous call
            int fileSize = getFileSize();
            System.out.println(Thread.currentThread().isInterrupted());
            // ...
            Future<Void>[] futures = new Future[5];
            for (int i = 0; i < futures.length; ++i) {
                futures[i] = executorService.submit(new Callable<Void>() { /* ... */ });
            }
            for (int i = 0; i < futures.length; ++i) {
                futures[i].get(); // wait for it to complete
            }
            return null;
        }
    });
    synchronized (mTaskMap) {
        mTaskMap.put(task.getId(), future);
    }
    return task.getId();
}

public void stopTaskAsync(int taskId) {
    executorService.execute(new Runnable() {
        public void run() {
            Future<Void> future = mTaskMap.get(taskId);
            future.cancel(true);
        }
    });
}
I noticed a weird behavior: after I called stopTaskAsync() for all 5 tasks, there would always be at least one task that actually got stopped (i.e. Thread.currentThread().isInterrupted() returned true), while the other 4 tasks kept running.
I have also tried your suggestion of setting an UncaughtExceptionHandler, but nothing comes out of that.
EDIT:
The problem was solved in this link: Can't stop a task which is started using ExecutorService
Well, the Javadoc of Future.cancel(boolean) says that:
If the task has already started, then the mayInterruptIfRunning parameter determines whether the thread executing this task should be interrupted in an attempt to stop the task.
So it's quite certain that the thread executing the task is interrupted. What could have happened is that one of the
... do many other things here ...
sections is accidentally clearing the thread's interrupted status without performing the desired handling. If you put a breakpoint in Thread.interrupt() you might catch the culprit.
Another option I can think of is that the task terminates before noticing the interrupt, either because it completed or because it threw some uncaught exception. Call Future.get() to determine that. Anyway, as asdasd mentioned, it is good practice to set an UncaughtExceptionHandler.
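A minimal sketch of installing such a handler via a custom ThreadFactory (note, as an aside, that exceptions thrown from tasks passed to submit() are captured in the Future and only reach the handler for Runnables passed to execute()):
ThreadFactory loggingFactory = runnable -> {
    Thread t = new Thread(runnable);
    t.setUncaughtExceptionHandler((thread, throwable) ->
            System.err.println("Task died on " + thread.getName() + ": " + throwable));
    return t;
};
ExecutorService executorService = Executors.newFixedThreadPool(4, loggingFactory);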
What you're doing is very dangerous: you're using a thread pool to execute tasks (which I'll call downloaders), and the same thread pool to execute tasks which
wait for the downloaders to finish (which I'll call controllers)
or ask the controllers to stop.
This means that if the core number of threads is reached after a controller has started, the downloaders will be put in the queue of the thread pool, and the controller thread will never finish. Similarly, if the core number of threads is reached when you execute the cancelling task, this cancelling task will be put in the queue and won't execute until some other task has finished.
You should probably use one thread pool for downloaders, another one for controllers, and the current thread to cancel the controllers; a sketch follows.
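A rough sketch of that split, with the understanding that the pool sizes and the controllerPool/downloadPool names are just placeholders for whatever fits your workload:
// One pool per role, so controllers can never starve downloaders (or vice versa).
ExecutorService controllerPool = Executors.newFixedThreadPool(5);   // one slot per controller task
ExecutorService downloadPool   = Executors.newFixedThreadPool(25);  // e.g. 5 tasks x 5 download parts

Future<?> controller = controllerPool.submit(() -> {
    // submit the 5 download-part callables to downloadPool and wait on their futures here
});

// Cancel from the calling thread rather than submitting the cancellation to a pool.
controller.cancel(true);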
I think you'll find the solution here. The main point is that the cancel method causes the task's thread to be interrupted, which typically surfaces as an InterruptedException. Please check whether your thread is still running after cancellation. Are you sure you didn't try to interrupt an already-finished thread? Are you sure your thread didn't fail with some other exception? Try setting up an UncaughtExceptionHandler.
