I get this error in the log file while the threads are running. I don't know where it occurs, since the threads don't stop and keep processing data without issues; my only problem is that this error appears multiple times in the log file:
java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@419a9977
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@2522cdb9[Terminated,
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2123929]
I did some research and found that this happens when the executor has been shut down somewhere, but as far as I can tell that does not happen anywhere in my code.
Without looking at the code we can't really tell you more about the problem. The exception clearly states that the executor has been terminated and its active thread count is zero. It seems that even after shutting down the executor you are trying to submit more work to it. Are you trying to add more tasks after calling executor.shutdown()?
As per the docs: "New tasks submitted in method execute(Runnable) will be rejected when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated. In either case, the execute method invokes the RejectedExecutionHandler.rejectedExecution(Runnable, ThreadPoolExecutor) method of its RejectedExecutionHandler."
Look at the doc here: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html
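If you cannot track down where the shutdown happens, you can also guard submissions against a terminated executor, or install a RejectedExecutionHandler that logs instead of throwing. A minimal sketch of both ideas (the scheduler field and the 500 ms delay are illustrative, not taken from the question):

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SafeSubmitExample {

    private final ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1);

    public SafeSubmitExample() {
        // Log rejected tasks instead of throwing RejectedExecutionException.
        scheduler.setRejectedExecutionHandler((task, executor) ->
                System.err.println("Task rejected; executor shut down: " + executor.isShutdown()));
    }

    public void submitWork(Runnable task) {
        // Skip submission once the executor has been shut down.
        if (!scheduler.isShutdown()) {
            scheduler.schedule(task, 500, TimeUnit.MILLISECONDS);
        }
    }
}

This only hides the symptom, of course; the real fix is still to find the code path that shuts the executor down while work is still being scheduled.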
Old question, but I had the same issue and the comment by @lambad saved my day. I had this piece of code:
ttlExecutorService.schedule(new Runnable() {
    @Override
    public void run() {
        // ...
    }
}, 1, TimeUnit.MINUTES);
ttlExecutorService.shutdown();
I removed the shutdown call and the exception was no longer thrown.
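If the executor does need to be shut down eventually, one option is to wait for the scheduled task to complete first, for example by blocking on the returned future. A minimal sketch (ttlExecutorService is assumed to be a plain single-threaded scheduler; the printed message stands in for the real work):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedShutdownExample {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ttlExecutorService = Executors.newSingleThreadScheduledExecutor();

        ScheduledFuture<?> future = ttlExecutorService.schedule(
                () -> System.out.println("doing the delayed work"),
                1, TimeUnit.MINUTES);

        future.get();                   // wait for the delayed task to finish first
        ttlExecutorService.shutdown();  // only then refuse new tasks
    }
}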
Working with Java 11 and Spring 2.1.6.RELEASE.
I'm experiencing an issue where, if I send a few records to the topic that this Kafka consumer consumes from, everything works as planned. However, if I produce a lot of records (a hundred or so), the executor queues the processing but never actually does it. Am I using the executor wrong? I don't think it's a Kafka issue. Is there a way to query the executor to debug this?
@Configuration
public class ExecutorServiceConfig {

    @Bean
    public ExecutorService createExecutorService() {
        return Executors.newFixedThreadPool(10);
    }
}
@KafkaListener(topics = "${kafka.consumer.topic.name}",
        groupId = "${spring.kafka.consumer.group-id}")
public void consume(PayrollDto message) {
    log.info("Consumed message for processing:" + message); // this log is hit for all records
    executor.execute(new ConsumerExecutor(message));
}

private class ConsumerExecutor implements Runnable {

    PayrollDto message;

    public ConsumerExecutor(PayrollDto message) {
        this.message = message;
    }

    @Override
    public void run() {
        log.info("Beginning processing for payroll:" + this.message); // this log is hit for only some records
        processPayrollList(this.message);
        log.info("Finished processing for payroll:" + this.message);
    }
}
It looks like you are using pure Java SE ExecutorService classes rather than Spring-specific TaskExecutor classes.
There is not enough information to diagnose this properly. (You haven't provided any clear evidence that the tasks have been "forgotten". Your reported evidence is that they are not executed; "forgotten tasks" is only one of a number of possible explanations.)
The only explanations that I can think of are:
1. Your processPayrollList method is not terminating in some circumstances. It could be deadlocking, going into an infinite loop, waiting forever on some external service and so on. If enough (i.e. 10) tasks failed to terminate, you would run out of threads in the pool, and no more tasks would be processed. That is consistent with your evidence.
2. Something in your application is replacing executor with a different ExecutorService object.
3. Something in your application is removing tasks from the queue without executing them.
4. A build or deployment "process" issue; e.g. the code you are running is different to the code you are looking at. (It happens.)
5. An unreported bug in the Java 11 class library.
Of these, (1) is the most likely (IMO). Explanations (2) and (3) involve application code that I assume you would have mentioned in the question. I would treat (5) as implausible ... unless you can provide some clear evidence in the form of a minimal reproducible example.
Am I using the executor wrong?
It doesn't look like it from the code you have shown us.
Is there a way to query the executor to debug this?
You could take a thread stack dump (e.g. using the jstack command) and look at the status of the threads in the pool.
You could also cast executor to ThreadPoolExecutor and use that API to look at the queue length, the number of active threads and so on.
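For example, a minimal sketch of that kind of inspection (assuming executor is the injected ExecutorService bean from the configuration above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;

public class ExecutorDebug {

    // Returns a one-line summary of the pool state, suitable for logging.
    public static String describe(ExecutorService executor) {
        if (executor instanceof ThreadPoolExecutor) {
            ThreadPoolExecutor tpe = (ThreadPoolExecutor) executor;
            return "pool size=" + tpe.getPoolSize()
                    + ", active=" + tpe.getActiveCount()
                    + ", queued=" + tpe.getQueue().size()
                    + ", completed=" + tpe.getCompletedTaskCount();
        }
        return "not a ThreadPoolExecutor: " + executor;
    }
}

Logging that summary from the @KafkaListener method would show whether tasks are piling up in the queue or whether all ten workers are stuck inside processPayrollList.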
Note that this is not due to the ExecutorService being shut down. If that happened, you would get RejectedExecutionException in calls to execute.
I'm reading files from SQS in an unbounded stream. As I read each file, I want to submit it to a second queue for processing. I can process several files simultaneously, so I run them in threads and want to block further reads from the queue when all threads are in use.
To that end I used this:
ExecutorService executorService =
        new ThreadPoolExecutor(
                maxThreads,  // core thread pool size
                maxThreads,  // maximum thread pool size
                1,           // keep-alive time for idle threads above the core size
                TimeUnit.MINUTES,
                new ArrayBlockingQueue<Runnable>(maxThreads, true),
                new ThreadPoolExecutor.CallerRunsPolicy());
Where maxThreads = 2.
Files are read in blocks of ten and processed as such:
for (Message msg : resp.getMessages()) {
    Gson g = new Gson();
    MessageBody messageBody = g.fromJson(msg.getBody(), MessageBody.class);
    MessageRecords messageRecords = g.fromJson(messageBody.getMessage(), MessageRecords.class);
    List<MessageRecords.Record> records = messageRecords.getRecords();

    executorService.submit(new Runnable() {
        @Override
        public void run() {
            // ... do some work based on file type
        }
    });
}
Watching the thread count, I see it climb steadily until the system runs out of memory and the job dies with an "unable to create native thread" exception. After this the VM (AWS) doesn't accept SSH logins until it gets stopped/restarted.
It seems like there must be a step where a given thread is released/cleaned up, but I'm not seeing where that should happen.
What am I doing wrong?
Edit:
Yes, run() does finish and exit.
Nothing else interacts with these threads. The run() method gets a file, looks at its type and calls a function based on that type. The function parses the file and returns, and then run() is finished.
The issue was my use of ThreadPoolExecutor, specifically where it appeared in the application. I was creating a new thread pool for each loop through a block of SQS messages and never shutting it down afterwards. Moving the creation outside the loop and reusing the same pool fixes the problem.
So - a big hairy UFU.
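For anyone hitting the same thing, here is a minimal sketch of the corrected shape, with the pool created once and reused for every batch; the polling and parsing methods are placeholders, not the original code:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SqsProcessorSketch {

    private static final int MAX_THREADS = 2;

    public static void main(String[] args) {
        // Created once, outside the polling loop, and reused for every batch.
        ExecutorService executorService = new ThreadPoolExecutor(
                MAX_THREADS, MAX_THREADS,
                1, TimeUnit.MINUTES,
                new ArrayBlockingQueue<Runnable>(MAX_THREADS, true),
                // Rejected tasks run on the polling thread, which throttles further reads.
                new ThreadPoolExecutor.CallerRunsPolicy());

        while (moreMessages()) {
            for (final String msg : nextBatch()) {      // placeholder for resp.getMessages()
                executorService.submit(new Runnable() {
                    @Override
                    public void run() {
                        process(msg);                   // placeholder for the per-file work
                    }
                });
            }
        }
        executorService.shutdown();
    }

    private static boolean moreMessages() { return false; }                     // placeholder
    private static List<String> nextBatch() { return Collections.emptyList(); } // placeholder
    private static void process(String msg) { }                                 // placeholder
}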
I have a problem: I am trying to execute a task with a ScheduledExecutorService, and I start it with:
updateTagDataHandle = scheduler.scheduleWithFixedDelay(updateTagDataRunnable, 500, 500, TimeUnit.MILLISECONDS);
and after several successful runs it stops. The task itself takes a few seconds; I checked with println that it reaches the end of the task with no errors, and I print any exception and didn't see one at the end. I need it to keep running indefinitely.
Any help would be appreciated.
Edit:
My code for initializing the task scheduler was:
scheduler = Executors.newScheduledThreadPool(1);
so corePoolSize = 1, which means there is only one thread alive and all tasks share that one thread. But setting the pool size to more than one does not help; there still seems to be only one thread active.
Same question here:
Executors Factory method newScheduledThreadPool always returns the same Thread pool
and here:
Why doesn't ScheduledExecutorService spawn threads as needed?
Any help would be appreciated.
Edit:
I didn't find a solution, so I used a custom thread factory for the scheduler:
scheduler = Executors.newScheduledThreadPool(7, new ThreadFactory() {
    @Override
    public Thread newThread(Runnable r) {
        return new Thread(r);
    }
});
Try changing the initial delay to 1000 milliseconds.
I was trying the same code in my Android app and this solved the problem.
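One more thing worth ruling out: per the ScheduledExecutorService javadoc, if any execution of the task throws an exception, subsequent executions are suppressed, and that exception only surfaces when you call get() on the returned future, so a println inside the task can easily miss it. A minimal sketch of a defensive wrapper (doWork() is a placeholder for the real task body):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SafeFixedDelayExample {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        scheduler.scheduleWithFixedDelay(() -> {
            try {
                doWork(); // placeholder for the real task
            } catch (Throwable t) {
                // Log and swallow, so one bad run does not cancel all future runs.
                t.printStackTrace();
            }
        }, 500, 500, TimeUnit.MILLISECONDS);
    }

    private static void doWork() { /* ... */ }
}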
I have tried many different ways to immediately stop a task which is started using an ExecutorService, with no luck.
Future<Void> future = executorService.submit(new Callable<Void>() {
    @Override
    public Void call() {
        // ... do many other things here ...
        if (Thread.currentThread().isInterrupted()) {
            return null;
        }
        // ... do many other things here ...
        if (Thread.currentThread().isInterrupted()) {
            return null;
        }
        return null;
    }
});

if (flag) { // may be true and directly cancel the task
    future.cancel(true);
}
Sometimes I need to cancel the task immediately after it is started. You may be curious why I want to do this; imagine a scenario where a user accidentally hits the "Download" button to start a download task and immediately wants to cancel the action because it was just an accidental click.
The problem is that after calling future.cancel(true), the task is not stopped, Thread.currentThread().isInterrupted() still returns false, and I have no way to know from inside the call() method that the task was cancelled.
I am thinking of setting a flag like cancelled = true after calling future.cancel(true) and checking it constantly in the call() method, but I think this is a hack and the code could get very ugly because the user can start many tasks at the same moment.
Is there a more elegant way of achieving what I want?
EDIT:
This really drives me mad. I have spent almost a day on this problem now, so I will try to explain the problem I am facing in a bit more detail.
I do the following to start 5 tasks; each task will start 5 threads to download a file. Then I stop all 5 tasks immediately. Each of the method calls below submits its work to an ExecutorService (ExecutorService.submit(task)) to make it asynchronous, as you can tell from the method name suffixes.
int t1 = startTaskAsync(task1);
int t2 = startTaskAsync(task2);
int t3 = startTaskAsync(task3);
int t4 = startTaskAsync(task4);
int t5 = startTaskAsync(task5);

stopTaskAsync(t1);
stopTaskAsync(t2);
stopTaskAsync(t3);
stopTaskAsync(t4);
stopTaskAsync(t5);
In startTaskAsync(), I simply initiate a socket connection to the remote server to get the size of the file (and this certainly takes some time). After successfully getting the file size, I start 5 threads to download different parts of the file, like the following (the code is simplified to make it easier to follow):
public void startTaskAsync(DownloadTask task) {
    Future<Void> future = executorService.submit(new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            // this is a synchronous call
            int fileSize = getFileSize();
            System.out.println(Thread.currentThread().isInterrupted());
            // ...
            Future<Void>[] futures = new Future[5];
            for (int i = 0; i < futures.length; ++i) {
                futures[i] = executorService.submit(new Callable<Void>() { /* ... */ });
            }
            for (int i = 0; i < futures.length; ++i) {
                futures[i].get(); // wait for it to complete
            }
            return null;
        }
    });

    synchronized (mTaskMap) {
        mTaskMap.put(task.getId(), future);
    }
}
public void stopTaskAsync(int taskId) {
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            Future<Void> future = mTaskMap.get(taskId);
            future.cancel(true);
        }
    });
}
I noticed a weird behavior: after I called stopTaskAsync() for all 5 tasks, there would always be at least one task that got stopped (i.e. Thread.currentThread().isInterrupted() returned true), while the other 4 tasks kept running.
I have also tried your suggestion of setting an UncaughtExceptionHandler, but nothing came out of that.
EDIT:
The problem was solved in this link: Can't stop a task which is started using ExecutorService
Well, the javadoc of Future.cancel(boolean) says that:
If the task has already started, then the mayInterruptIfRunning parameter determines whether the thread executing this task should be interrupted in an attempt to stop the task.
so it's quite certain that the thread that executes the task is interrupted. What could have happened is that one of the
... do many other things here ...
blocks is accidentally clearing the thread's interrupted status without performing the desired handling. If you put a breakpoint in Thread.interrupt() you might catch the culprit.
Another option I can think of is that the task terminates before capturing the interrupt, either because it completed or because it threw some uncaught exception. Call Future.get() to determine that. Anyway, as asdasd mentioned, it is good practice to set an UncaughtExceptionHandler.
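For the first point, a minimal sketch of an interrupt-friendly task body (the sleep stands in for a chunk of real work); the key detail is restoring the interrupt flag if an InterruptedException is caught instead of swallowing it:

import java.util.concurrent.Callable;

public class InterruptAwareTask implements Callable<Void> {

    @Override
    public Void call() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Thread.sleep(100); // placeholder for one chunk of real (blocking) work
            }
        } catch (InterruptedException e) {
            // Restore the flag so callers further up the stack still see the interrupt.
            Thread.currentThread().interrupt();
        }
        return null;
    }
}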
What you're doing is very dangerous: you're using a thread pool to execute tasks (which I'll call downloaders), and the same thread pool to execute tasks which
- wait for the downloaders to finish (which I'll call controllers), or
- ask the controllers to stop.
This means that if the core number of threads is reached after the controller has started, the downloaders will be put in the queue of the thread pool, and the controller thread will never finish. Similarly, if the core number of threads is reached when you execute the cancelling task, this cancelling task will be put in the queue, and won't execute until some other task is finished.
You should probably use a thread pool for downloaders, another one for controllers, and the current thread to cancel the controllers.
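A minimal sketch of that separation (the pool names and sizes are illustrative, not from the question):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SeparatePoolsSketch {

    // One pool per role, so controllers can never starve downloaders or vice versa.
    private final ExecutorService controllerPool = Executors.newFixedThreadPool(5);
    private final ExecutorService downloaderPool = Executors.newFixedThreadPool(25);

    public Future<?> startTask(Runnable controller) {
        // The controller submits its part-downloads to downloaderPool and waits for them.
        return controllerPool.submit(controller);
    }

    public void stopTask(Future<?> controllerFuture) {
        // Cancel directly from the current thread; no extra task is queued just to cancel.
        controllerFuture.cancel(true);
    }
}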
I think you'll find a solution here. The main point is that cancelling interrupts the thread, so blocking calls will raise InterruptedException. Please check whether your thread is still running after cancellation. Are you sure you didn't try to interrupt a thread that had already finished? Are you sure your thread didn't fail with some other exception? Try to set up an UncaughtExceptionHandler.
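A minimal sketch of wiring an UncaughtExceptionHandler into the pool via a ThreadFactory (the pool size and log message are illustrative). Note that exceptions thrown from tasks passed to submit() are captured in the Future instead of reaching the handler, so also check Future.get():

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class LoggingThreadFactoryExample {
    public static void main(String[] args) {
        ThreadFactory factory = r -> {
            Thread t = new Thread(r);
            // Fires for anything that escapes a Runnable passed to execute().
            t.setUncaughtExceptionHandler((thread, throwable) ->
                    System.err.println("Uncaught in " + thread.getName() + ": " + throwable));
            return t;
        };
        ExecutorService executorService = Executors.newFixedThreadPool(5, factory);

        executorService.execute(() -> { throw new RuntimeException("boom"); }); // handler fires
        executorService.shutdown();
    }
}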
Using Java and an app server, I deploy an application that has a thread executor.
During undeploy I ask the executor to shut down. This successfully cancels all the tasks. However, via VisualVM I can still see a thread that represents the executor itself, and it is in the waiting state. I don't keep any references to the executor, since the whole application gets undeployed. So if I repeat the deploy/undeploy cycle multiple times, I can see the thread count grow.
How do I get rid of them?
UPDATE:
Here is the code:
public void startScheduler()
{
    if (scheduledExecutor == null)
    {
        scheduledExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("My ScheduledExecutor"));
        processFuture = scheduledExecutor.scheduleAtFixedRate(new Runnable()
        {
            @Override
            public void run()
            {
                startProcessor();
            }
        }, 0, 84600, TimeUnit.SECONDS);
    }
}

public void stopScheduler()
{
    if (processFuture != null)
    {
        processFuture.cancel(true);
        processFuture = null;
    }

    if (scheduledExecutor != null)
    {
        try
        {
            scheduledExecutor.shutdownNow();
            scheduledExecutor.awaitTermination(10, TimeUnit.SECONDS);
        }
        catch (InterruptedException ignored)
        {}
        finally
        {
            scheduledExecutor = null;
        }
    }
}
Could you please elaborate on what you mean by "a thread that represents the executor itself"? What is its name/id/thread group? I don't think the executor service creates such a thread.
Executors create new threads (using the configurable ThreadFactory). A Thread automatically inherits some properties of its parent, that is, the Thread.currentThread() that created it. The most problematic of these in a web application scenario with deploy/undeploy cycles is the thread's ContextClassLoader, which is inherited from the parent thread. If your ContextClassLoader holds on to classes from within your web application archive, then the spawned executor thread will also have a reference to this ClassLoader. If the code executed by the executor uses, e.g., ThreadLocals holding classes from the WebappClassLoader, you may experience a ClassLoader leak.
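To make sure the executor, and with it the inherited ClassLoader reference, goes away together with the application, one common approach is to shut it down from a ServletContextListener when the webapp is undeployed. A minimal sketch (ExecutorHolder.get() is a placeholder for however your application exposes the executor):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ExecutorShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do on deploy in this sketch.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ExecutorService executor = ExecutorHolder.get(); // placeholder accessor
        executor.shutdownNow();
        try {
            // Give running tasks a short grace period before the webapp ClassLoader goes away.
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}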
The Executor needs to be stopped explicitly, using the method shutdown, otherwise it will hang around like you have found out. You can see from the javadoc for Executors.newSingleThreadExecutor that it includes a worker thread.
The javadoc for shutdownNow says:
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt(), so if any tasks mask or fail to respond to interrupts, they may never terminate.
If the task being executed doesn't respond to interrupts (swallows InterruptedException without ever exiting), that would cause your executor to never shut down. Any non-daemon threads that don't get shut down explicitly will hang around and keep the JVM from exiting. That can be a fun one to debug.
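One way to see whether that is what's happening is to check what shutdownNow() and awaitTermination() report. A minimal sketch, where the task deliberately swallows the interrupt to reproduce the symptom:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDiagnostics {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        executor.execute(() -> {
            while (true) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    // Deliberately swallowed: this task never responds to shutdownNow().
                }
            }
        });

        List<Runnable> neverStarted = executor.shutdownNow();
        boolean terminated = executor.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println("Queued tasks that never ran: " + neverStarted.size());
        System.out.println("Executor terminated cleanly: " + terminated); // false here

        // Because the worker thread never exits, this non-daemon thread also keeps
        // the JVM alive - exactly the leaked-thread symptom described above.
    }
}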