Stop a Spring @Scheduled execution if it hangs after some fixed time - java

I have used Spring Framework's @Scheduled to schedule my job to run every 5 minutes using a cron expression. But sometimes my job waits indefinitely for an external resource and I can't put a timeout there. I can't use fixedDelay, because the previous run sometimes waits indefinitely and I have to refresh the data every 5 minutes.
So I was looking for an option in Spring Framework's @Scheduled to stop that process/thread after a fixed time, whether it has run successfully or not.
I found the setting below, which initializes a ThreadPoolExecutor with a keepAliveTime of 120 seconds, and put it in a @Configuration class. Can anybody tell me whether this will work as I expect?
@Bean(destroyMethod = "shutdown")
public Executor taskExecutor() {
    int coreThreads = 8;
    int maxThreads = 20;
    final ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
            coreThreads, maxThreads, 120L,
            TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()
    );
    threadPoolExecutor.allowCoreThreadTimeOut(true);
    return threadPoolExecutor;
}

I'm not sure this will work as expected. The keepAliveTime applies to idle threads, and a thread that is blocked waiting for a resource is not idle. Furthermore, it only applies when the number of threads is greater than the core pool size, so you can't really know when it happens unless you monitor the thread pool.
keepAliveTime - when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.
What you can do is the following:
public class MyTask {

    private final long timeout;

    public MyTask(long timeout) {
        this.timeout = timeout;
    }

    @Scheduled(cron = "")
    public void cronTask() throws Exception {
        Future<Object> result = doSomething();
        result.get(timeout, TimeUnit.MILLISECONDS);
    }

    // Note: for @Async to kick in, the call must go through the Spring
    // proxy, so in practice this method belongs on a separate bean.
    @Async
    Future<Object> doSomething() {
        // do the actual work, get resources etc.
        return new AsyncResult<Object>(null); // wrap the real result here
    }
}
Don't forget to add @EnableAsync.
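For completeness, a minimal sketch of the configuration class (the class name is hypothetical; @EnableScheduling is only needed because the @Scheduled method lives in the same application):

@Configuration
@EnableAsync
@EnableScheduling
public class SchedulingConfig {
}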
It's also possible to do the same without @Async by implementing a Callable.
Edit: Keep in mind that it will wait until the timeout, but the thread running the task won't be interrupted. You will need to call Future.cancel when the TimeoutException occurs, and in the task check isInterrupted() to stop the processing. If you are calling an API, be sure that isInterrupted() is checked.
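A minimal sketch of that timeout-and-cancel pattern (the worker and timeout fields are assumptions, not from the original code):

@Scheduled(cron = "0 */5 * * * *") // assumed: every 5 minutes
public void cronTask() throws InterruptedException, ExecutionException {
    Future<Object> result = worker.doSomething(); // the @Async method, on another bean
    try {
        result.get(timeout, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
        // Only helps if the task cooperatively checks
        // Thread.currentThread().isInterrupted()
        result.cancel(true);
    }
}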

allowCoreThreadTimeOut and the keepAliveTime setting don't help, because they just allow worker threads to be terminated after some time without work (see the javadocs).
You say your job waits indefinitely for an external resource. I'm sure it's because you (or some third-party library you use) use sockets whose timeout is infinite by default.
Also keep in mind that the JVM ignores Thread.interrupt() while a thread is blocked in socket connect/read.
So find out which socket library is used in your task (and how exactly it is used) and change its default timeout settings.
As an example: RestTemplate is widely used inside Spring (in the REST client, in Spring Social, in Spring Security OAuth and so on), and RestTemplate instances are created via a ClientHttpRequestFactory implementation. By default, Spring uses SimpleClientHttpRequestFactory, which uses JDK sockets, and by default all of its timeouts are infinite.
So find out where exactly you freeze, read the relevant docs, and configure the timeouts properly.
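For illustration, a sketch of giving the default request factory finite timeouts (the values are examples, not recommendations):

SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(5_000);  // ms to establish the connection
factory.setReadTimeout(120_000);   // ms to wait on a read before giving up
RestTemplate restTemplate = new RestTemplate(factory);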
P.S. If you don't have enough time and are "feeling lucky", try running your app with the JVM properties sun.net.client.defaultConnectTimeout and sun.net.client.defaultReadTimeout set to some reasonable values (see the networking docs for more details).
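For example (both properties are in milliseconds; the values and the jar name here are just placeholders):

java -Dsun.net.client.defaultConnectTimeout=5000 \
     -Dsun.net.client.defaultReadTimeout=120000 \
     -jar yourapp.jar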

The keepAliveTime is just for cleaning out worker threads that haven't been needed for a while; it doesn't have any impact on the execution time of the tasks submitted to the executor.
If whatever is taking time respects interrupts, you can start a new thread and join it with a timeout, interrupting it if it doesn't complete in time.
public class SomeService {

    @Scheduled(fixedRate = 5 * 60 * 1000)
    public void doSomething() throws InterruptedException {
        Thread taskThread = new TaskThread();
        taskThread.start();
        taskThread.join(120 * 1000); // wait at most two minutes
        if (taskThread.isAlive()) {
            // We timed out
            taskThread.interrupt();
        }
    }

    private class TaskThread extends Thread {
        public void run() {
            // Do the actual work here
        }
    }
}

How to debug released connections with Spring Webflux and Tomcat?

When using Spring WebFlux with a Mono or Flux return type, the HTTP connection thread is parked/released while the connection waits for the response, so the waiting connection does not use up max-connections.
Question: how can I test/prove that the connection is really released while waiting for the response, and not blocking one of the max-connections?
I already enabled DEBUG logging, but that did not show anything regarding this question.
@RestController
public class MyServlet {

    @GetMapping("/")
    public Mono<String> test() { // or Flux<String>
        return Mono.just(service.blocking());
    }
}

@Service
public class SlowService {

    public String blocking() {
        try {
            TimeUnit.SECONDS.sleep(10); // simulate a slow, blocking call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "OK";
    }
}
Or is that approach incorrect altogether, and would I have to use:
Mono.fromCallable(() -> service.blocking()).subscribeOn(Schedulers.elastic());
But still, how can I see from the logs that the connection gets parked correctly?
To test, I'm using server.tomcat.max-threads=5. I'm trying to rewrite my blocking service so that those threads are not blocked during the sleep, and thus more than 5 connections can reach my service concurrently.
There are two thread pools: the fork-join pool that handles all the regular work, and the scheduler pool that handles scheduled tasks.
return Mono.just(service.blocking());
This will block one of the threads in the ForkJoinPool, so fewer events can be handled by WebFlux, which slows down your service.
Mono.fromCallable(() -> service.blocking()).subscribeOn(Schedulers.elastic());
This will "offload" the task to the scheduler pool of threads, so another thread pool handles it and it doesn't hog one of the ForkJoinPool threads.
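Applied to the controller above, that would look roughly like this (a sketch; Schedulers.boundedElastic() is the replacement for the now-deprecated Schedulers.elastic() in recent Reactor versions):

@GetMapping("/")
public Mono<String> test() {
    // Run the blocking call on a scheduler meant for blocking work,
    // so the request-handling threads are not tied up during the sleep.
    return Mono.fromCallable(() -> service.blocking())
            .subscribeOn(Schedulers.boundedElastic());
}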
How to test this? You need to load test your service, or, as most people do, trust the framework to do what it sets out to do and trust that the Spring team has tested their side of things.
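If you do want to measure it yourself, a crude sketch is to fire more concurrent requests than server.tomcat.max-threads and check that they all complete in roughly one sleep period instead of queueing up (the URL and request count are assumptions):

ExecutorService pool = Executors.newFixedThreadPool(20);
List<Future<Integer>> responses = new ArrayList<>();
long start = System.nanoTime();
for (int i = 0; i < 20; i++) {
    responses.add(pool.submit(() -> {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8080/").openConnection();
        return conn.getResponseCode(); // blocks until the response arrives
    }));
}
for (Future<Integer> response : responses) {
    response.get(); // propagate any failure
}
System.out.printf("20 requests took %d s%n",
        TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start));
pool.shutdown();

With 5 request threads and a truly blocking service this should take roughly 40 s; if the work is offloaded correctly, it should finish in close to 10 s.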

Is there a way to terminate CXF web service call?

I am using CXF to call a web service. It is used in a simple way, as described in the documentation:
HelloService service = new HelloService();
Hello client = service.getHelloHttpPort();
String result = client.sayHi("Joe");
How can I terminate this service call when it takes too long?
I found only one related question, but it doesn't provide any solution:
How to terminate CXF webservice call within Callable upon Future cancellation
I think this is more a function of the web server. For example, if you use Jetty to serve your CXF content, you can set the thread pool to something that will watch your threads.
ThreadPoolExecutor pool = new ThreadPoolExecutor(...);
ExecutorService svc = new ControlledExecutorService(pool);
server.setThreadPool(new org.eclipse.jetty.util.thread.ExecutorThreadPool(svc));
Then, for the custom executor service (sorry, all this code was typed directly in the browser; I'm on an iPad with no Java, so you'll likely need to make slight adjustments, but the useful parts should be here):
// Extending AbstractExecutorService (rather than implementing ExecutorService
// directly) leaves only execute() and the lifecycle methods to define.
public class ControlledExecutorService extends AbstractExecutorService {

    private final ExecutorService es;

    public ControlledExecutorService(ExecutorService wrapped) {
        es = wrapped;
    }

    @Override
    public void execute(final Runnable command) {
        Future<Boolean> future = es.submit(new Callable<Boolean>() {
            public Boolean call() throws Exception {
                command.run();
                return true;
            }
        });
        // Do the proper monitoring of your Future and interrupt it
        // using Future.cancel(true) if you need to.
    }

    // Lifecycle methods simply delegate to the wrapped service.
    @Override public void shutdown() { es.shutdown(); }
    @Override public List<Runnable> shutdownNow() { return es.shutdownNow(); }
    @Override public boolean isShutdown() { return es.isShutdown(); }
    @Override public boolean isTerminated() { return es.isTerminated(); }
    @Override public boolean awaitTermination(long timeout, TimeUnit unit)
            throws InterruptedException {
        return es.awaitTermination(timeout, unit);
    }
}
Be sure to pass true to cancel() so that it sends the interrupt.
Also remember that, just as with any thread, sending an interrupt doesn't mean the thread will comply. You have to do some work in your tasks to make sure they're well behaved: notably, periodically check Thread.currentThread().isInterrupted() and properly handle InterruptedException, picking it up and stopping the task gracefully instead of just letting the exception blow everything up.
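A sketch of what a well-behaved task body might look like (doOneUnitOfWork is a hypothetical helper):

public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            doOneUnitOfWork(); // keep units small so interrupts are noticed quickly
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag
            return; // stop gracefully
        }
    }
}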

Globally Control thread pools in ExecutorService

I have many I/O-intensive jobs that are triggered via Jersey web service calls like this:
https://localhost/rest/execute/job/job1
I want to globally control the number of threads these jobs use. How do I do this? I have thought of the following two options; please suggest whether I am on the right track or whether there is a better solution.
Approach 1:
Create a wrapper class over ThreadPoolExecutor that provides threads to various services to submit runnables.
class GlobalThreadPool {
    int corePoolSize = 1;
    int maximumPoolSize = 4;
    BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10);
    private ExecutorService pool =
            new ThreadPoolExecutor(corePoolSize, maximumPoolSize, ..., q);

    public Future<?> run(Runnable task) { return pool.submit(task); }
}
Then use this class from a ServletContextListener so it starts and shuts down with the web service.
(+) The REST resource obtains an instance of this class from the ServletContext and submits the requested job.
(-) If a submitted task wants to use an additional thread, it won't be able to, because the run() method won't have access to the servlet context.
Approach 2:
Create GlobalThreadPool as a singleton. It would not be started via the web service listener; instead, whenever a job needs it, the job obtains the instance and submits its runnable. A rough sketch follows below.
(+) any job has access to the pool, irrespective of whether it has access to the ServletContext
(-) shutting down the ExecutorService is not tied to the web service lifecycle
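A rough sketch of what Approach 2 might look like (names and pool sizes are hypothetical):

enum GlobalThreadPool {
    INSTANCE;

    private final ExecutorService pool = new ThreadPoolExecutor(
            1, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(10));

    public Future<?> run(Runnable task) {
        return pool.submit(task);
    }

    // Still has to be invoked from somewhere, e.g. a JVM shutdown hook
    // or a ServletContextListener.contextDestroyed if one is available.
    public void shutdown() {
        pool.shutdown();
    }
}

Any job could then call GlobalThreadPool.INSTANCE.run(task) without needing the ServletContext.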
Do you see any particular problem with either of these approaches that I might be missing? Is there a better (read: standard) approach for this kind of thing?

Using Spring @Async and ThreadPoolTaskScheduler with pool-size=1

We have a service implementation in our Spring-based web application that increments some statistics counters in the db. Since we don't want to mess up response time for the user, we defined them as asynchronous using Spring's @Async:
public interface ReportingService {

    @Async
    Future<Void> incrementLoginCounter(Long userid);

    @Async
    Future<Void> incrementReadCounter(Long userid, Long productId);
}
And the Spring task configuration looks like this:
<task:annotation-driven executor="taskExecutor" />
<task:executor id="taskExecutor" pool-size="10" />
Now, with pool-size="10", we have concurrency issues when two threads try to create the same initial record that will contain the counter.
Is it a good idea to set pool-size="1" here to avoid those conflicts? Does this have any side effects? We have quite a few places that fire async operations to update statistics.
The side effects would depend on the speed at which tasks are added to the executor compared with how quickly a single thread can process them. If the number of tasks added per second is greater than the number a single thread can process per second, the queue will grow over time until you finally get an out-of-memory error.
Check out the executor section on the Task Execution page; it states that having an unbounded queue is not a good idea.
If you know that you can process tasks faster than they are added, you are probably safe. If not, you should add a queue capacity and handle the submitting thread blocking when the queue reaches that size, as sketched below.
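A sketch of a bounded setup (the sizes are illustrative; CallerRunsPolicy makes the submitting thread run the task itself when the queue is full, which naturally throttles producers):

ExecutorService executor = new ThreadPoolExecutor(
        1, 1,                                  // single worker, as discussed
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<Runnable>(100), // bounded queue
        new ThreadPoolExecutor.CallerRunsPolicy());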
Looking at the two examples you posted: instead of a constant stream of @Async calls, consider updating a JVM-local variable upon client requests, and then have a background thread write it to the database every now and then. Along the lines of (mind the semi-pseudo-code):
class DefaultReportingService implements ReportingService {

    ConcurrentMap<Long, AtomicLong> numLogins = new ConcurrentHashMap<>();

    public void incrementLoginCounterForUser(Long userId) {
        // computeIfAbsent avoids an NPE for a user with no entry yet
        numLogins.computeIfAbsent(userId, id -> new AtomicLong()).incrementAndGet();
    }

    @Scheduled(..)
    void saveLoginCountersToDb() {
        for (Map.Entry<Long, AtomicLong> entry : numLogins.entrySet()) {
            AtomicLong counter = entry.getValue();
            Long toBeSummedWithTheValueInDb = counter.getAndSet(0L);
            // ...
        }
    }
}

In Java, how to shut down an ExecutorService when it may submit additional tasks to itself

I have a pipeline of tasks (each task in the pipeline has different parallelism requirements); each task works in a different ExecutorService. Tasks work on packets of data, so if we have 10 data packets then 10 tasks are submitted to service1, one task per data packet. Once a task submitted to service1 has actually been invoked, it may submit a new task to service2 or service3 to work further on the data packet, or not.
The following code works fine, i.e.:
shutdown() is invoked on service1 after everything has been submitted to service1.
Then awaitTermination() does not return until all the tasks that were submitted before the shutdown() have actually completed running.
-- shutdown() is then invoked on service2; because all tasks submitted to service1 have completed, and all tasks are submitted to service2 from tasks on service1, all tasks have been submitted to service2 before shutdown() is called on it.
-- and so on for service3.
ExecutorService[] services = {
        service1,
        service2,
        service3};

for (ExecutorService service : services) {
    service.shutdown();
    service.awaitTermination(1, TimeUnit.HOURS);
}
However, I have now added a case whereby service2 can break a data packet into smaller packets and submit additional tasks to service2 itself, and the code is now failing. The problem is that shutdown() is called on service2 once all the tasks on service1 have completed, but now we want to submit additional service2 tasks from a task running in service2.
My questions:
Does shutdown() return only after all submitted tasks have finished running, or does it return immediately and simply not stop already-submitted tasks from running? Update: answered below.
How do I solve my new problem?
"shutdown" simply tells the pool not to accept any more work. It does nothing more. All existing submitted work will be executed as normal. When the queue is drained, the pool will actually destroy all it's threads and terminate.
The problem here is that you're saying tasks in service2 will submit additional tasks to service2 for processing, so there seems to be no way to know when you should actually call shutdown. But there is an alternative, assuming these smaller packets don't break down any further:
List<Future<Void>> service2Futures = new ArrayList<Future<Void>>();

service2Futures.add(service2.submit(new Callable<Void>() {
    public Void call() throws Exception {
        // do your work, submit more stuff to service2
        // if you submit Callables, you could use Future.get() to wait on those
        // results.
        return null;
    }
}));

for (Future<Void> future : service2Futures) {
    future.get();
}

service2.shutdown();
...
What's going on here is that you're storing Future objects for the top-level submitted tasks (you'll have to use Callable rather than Runnable). Instead of immediately shutting the pool down after submission, you simply collect the Future objects, then wait until they are all done by cycling through them and calling get() on each one; get() blocks until the thread running that task has completed.
At that point, all of the top-level tasks are complete, and they will have submitted their second-level tasks. You can now issue a shutdown. This assumes the second-level tasks don't submit more work to service2.
All that being said, if you're using Java 7 you should consider taking a look at ForkJoinPool and RecursiveTask instead. It probably makes more sense for what you're doing.
ForkJoinPool forkJoinPool = new ForkJoinPool();

RecursiveAction action = new RecursiveAction() {
    protected void compute() {
        // break down here and build actions
        RecursiveAction[] smallerActions = ...;
        invokeAll(smallerActions);
    }
};

Future<Void> future = forkJoinPool.submit(action);
ExecutorService#shutdown lets already submitted tasks finish whatever they are doing - javadoc extract:
Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Invocation has no additional effect if already shut down.
This method does not wait for previously submitted tasks to complete execution. Use awaitTermination to do that.
In practice, you can consider that a call to shutdown does two things:
the ExecutorService can't accept new jobs any more
existing threads are terminated once they have finished running their remaining work
So to answer your questions:
if you have submitted all your tasks to service1 before you call service1.shutdown (anything submitted after that call will be rejected with an exception anyway), you are fine (i.e. if those tasks submit something to service2 and service2 is not shut down, they will be executed).
shutdown returns immediately and does not guarantee that already submitted tasks will stop (they could run forever).
The problem you are having is probably linked to how you submit your tasks from one service to another, and it seems difficult to solve with only the information you have given.
The best way forward would be to include an SSCCE in your question that replicates the behaviour you are seeing.
Instead of shutting down the ExecutorService, you should track the tasks themselves. They can pass around a "job state" object which they use to keep track of outstanding work, e.g.:
public class JobState {

    private int _runningJobs;

    public synchronized void start() {
        ++_runningJobs;
    }

    public synchronized void finish() {
        --_runningJobs;
        if (_runningJobs == 0) { notifyAll(); }
    }

    public synchronized void awaitTermination() throws InterruptedException {
        while (_runningJobs > 0) { wait(); }
    }
}

public class SomeJob implements Runnable {

    private final JobState _jobState;

    public SomeJob(JobState jobState) {
        _jobState = jobState;
    }

    public void run() {
        try {
            // ... do work here, possibly submitting new jobs, and pass along _jobState
        } finally {
            _jobState.finish();
        }
    }
}
// utility method to start a new job
public static void submitJob(ExecutorService executor, Runnable runnable, JobState jobState) {
    // call start _before_ submitting
    jobState.start();
    executor.submit(runnable);
}

// start main work
JobState state = new JobState();
Runnable firstJob = new SomeJob(state);
submitJob(executor, firstJob, state);
state.awaitTermination();
When you call shutdown, it does not wait for all tasks to finish; do that with awaitTermination, as you already do.
But once shutdown has been called, new tasks are blocked: your executor service rejects all new tasks. For ThreadPoolExecutor, rejected tasks are handled by the RejectedExecutionHandler. If you specify your own handler, you can process tasks that are rejected after shutdown. This is one possible workaround.
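A sketch of that workaround (what to do with a rejected task is up to you; the handler below is just a skeleton, and the pool sizes are illustrative):

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(),
        new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                // Called both when the pool is saturated and when tasks
                // arrive after shutdown(); log, drop, or divert the task here.
            }
        });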
Matt's answer looks like it may well work, but I'm concerned it may cause new issues.
I've come up with a solution that works without many code changes for my scenario, although it seems a bit clunky.
I've introduced a new service (service2a) that runs the same task as service2. When a task in service2 wants to submit a small data packet, it submits it to service2a rather than service2, so all sub-packets are submitted to service2a before service2 shuts down. This works for me because the smaller data packets don't need to be broken down into further sub-packets, and the sub-packet idea only applies to service2(a), not any of the other services.
