Run one scheduler at a time - Java

I have two beans, each of which runs a scheduler:
<bean id="eventService" class="xxx.xxxx.xxxxx.EventSchedulerImpl">
</bean>
<bean id="UpdateService" class="xxx.xxxx.xxxxx.UpdateSchedulerImpl">
</bean>
I want to make sure only one scheduler is running at a time: when EventSchedulerImpl is running, UpdateSchedulerImpl should not run. I have also implemented "StatefulJob" on both schedulers.
Will that work? Do I need to do more?
Appreciate your ideas, guys.

One way would be to configure a special task executor so that it contains only one thread in its thread pool, and configure its queue capacity so that jobs can be kept "on-hold". So only one task can run at a time with this task executor, and the other task will get queued.
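A minimal sketch of that configuration, assuming Spring's ThreadPoolTaskExecutor (the queue capacity is illustrative):
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class SingleThreadExecutorConfig {
    // At most one task runs at a time; later tasks wait in the queue.
    public static ThreadPoolTaskExecutor singleThreadExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(1);
        executor.setMaxPoolSize(1);
        executor.setQueueCapacity(10); // queued tasks are kept "on hold"
        executor.initialize();
        return executor;
    }
}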
But I don't like this approach. Having a single-threaded task executor seems like a recipe for problems down the road.
What I would do is simply write a wrapper service that calls your target services in the order you need. Then schedule the wrapper service instead.
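For example, a minimal sketch of such a wrapper, assuming both services expose a runOnce() method (the method names are hypothetical):
public class SchedulerWrapperService {
    private final EventSchedulerImpl eventService;
    private final UpdateSchedulerImpl updateService;

    public SchedulerWrapperService(EventSchedulerImpl eventService,
                                   UpdateSchedulerImpl updateService) {
        this.eventService = eventService;
        this.updateService = updateService;
    }

    // Schedule this single method instead of the two services separately;
    // since the calls run sequentially, they can never overlap.
    public void runBoth() {
        eventService.runOnce();
        updateService.runOnce();
    }
}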

How to check whether a @Scheduled annotation (Spring 3.1) cron job is running

In my application I have one cron job which connects to an FTP server and transfers files. It is very simple functionality, configured using Spring's @Scheduled annotation with a cron expression as a parameter.
It ran fine for a few months and then suddenly stopped with a ConnectException.
Maybe the FTP server was down, or something else happened that caused the cron thread to stop.
I googled for the reasons but didn't find any (nothing much in the logs either, just the exception name). It may be a one-time thing :)
My question is: can I put some check or watcher on the @Scheduled cron job to know whether it is running or not?
Sorry for my bad explanation/English.
Thanks
my question is: can I put some check or watcher on the @Scheduled cron job to know whether it is running or not?
Basically, you can't.
When you use @Scheduled, Spring uses a ScheduledAnnotationBeanPostProcessor to register the tasks you specify (annotated methods). It registers them with a ScheduledTaskRegistrar. The ScheduledAnnotationBeanPostProcessor is an ApplicationListener<ContextRefreshedEvent>. When it receives the ContextRefreshedEvent from the ApplicationContext, it schedules the tasks registered in the ScheduledTaskRegistrar.
During this step, these tasks are scheduled with a TaskScheduler which typically wraps a ScheduledExecutorService. If an exception is uncaught in a submitted task, then the task is removed from the ScheduledExecutorService queue.
The TaskScheduler interface does not provide a public API to retrieve the scheduled tasks, i.e. the ScheduledFuture objects. So you can't use it to find out whether your tasks are running or not.
And you probably shouldn't. Develop your tasks, your @Scheduled methods, to be able to withstand an exception being thrown. Sometimes, obviously, that's not possible. With a network error, for example, you would probably have to restart your application. Without knowing anything else about your application, I would say more logging is your best bet.
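For example, a minimal sketch of an exception-tolerant task; FtpTransferService is a hypothetical service and the cron expression is illustrative:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;

public class FtpTransferTask {
    private static final Logger log = LoggerFactory.getLogger(FtpTransferTask.class);
    private final FtpTransferService ftpTransferService; // hypothetical service

    public FtpTransferTask(FtpTransferService ftpTransferService) {
        this.ftpTransferService = ftpTransferService;
    }

    @Scheduled(cron = "0 0/15 * * * ?")
    public void transferFiles() {
        try {
            ftpTransferService.transfer();
        } catch (Exception e) {
            // Log and swallow: the exception never escapes the scheduled
            // method, so the task keeps firing on later cron triggers.
            log.error("FTP transfer failed; will retry on the next trigger", e);
        }
    }
}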

Spring Batch: different job launcher for different jobs

I have 2 different jobs (actually more, but for simplicity assume 2). Each job can run in parallel with the other job, but each instance of the same job should run sequentially (otherwise the instances will cannibalize each other's resources).
Basically I want each of these jobs to have its own queue of job instances. I figured I could do this using two different thread-pooled job launchers (each with 1 thread) and associating a job launcher with each job.
Is there a way to do this that will be respected when launching jobs from the Spring Batch Admin web UI?
There is a way to specify a specific job launcher for a specific job, but the only way I have found to do it is through the use of a JobStep.
If you have a job called "specificJob", this will create another job, "queueSpecificJob", so when you launch it, either through Quartz or the Spring Batch web admin, it will queue up a "specificJob" execution.
<bean id="specificJobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository"/>
<property name="taskExecutor">
<task:executor id="singleThreadPoolExecutor" pool-size="1"/>
</property>
</bean>
<job id="queueSpecificJob">
<step id="specificJobStep">
<job ref="specificJob" job-launcher="specificJobLauncher" job-parameters-extractor="parametersExtractor" />
</step>
</job>
@ahbutfore
How are the jobs triggered? Do you use a Quartz trigger by any chance?
If yes, would implementing/extending the org.quartz.StatefulJob interface in all your jobs do the work for you?
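For illustration, a minimal sketch of a Quartz job marked stateful (the class name is illustrative; the batch-launching body is omitted):
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.StatefulJob;

// StatefulJob tells Quartz not to execute two instances of this job
// concurrently (in Quartz 2.x roughly the same effect comes from
// @DisallowConcurrentExecution).
public class LaunchBatchJob implements StatefulJob {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // launch the Spring Batch job from here (omitted)
    }
}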
See the Spring bean configuration here: https://github.com/regunathb/Trooper/blob/master/examples/example-batch/src/main/resources/external/shellTaskletsJob/spring-batch-config.xml. Check the source code of org.trpr.platform.batch.impl.spring.job.BatchJob.
You can do more complex serialization (including across Spring Batch nodes) using a suitable "leader election" implementation. I have used Netflix Curator (a collection of Apache ZooKeeper recipes) in my project. Some pointers here: https://github.com/regunathb/Trooper/wiki/Useful-Batch-Libraries
Using a shell script you can launch different jobs in parallel.
Add an '&' to the end of each command line; the shell will execute them in parallel with its own execution.

Interrupting a job in a Quartz cluster

I have a Quartz setup with multiple instances and I want to interrupt a job wherever it is executed. As stated in the documentation, the Scheduler.interrupt() method is not cluster-aware, so I'm looking for some common practice to overcome this limitation.
Well, here are some basics you should use to achieve that.
When running in cluster mode, the information about currently running jobs is available in the Quartz tables. For instance, the QRTZ_FIRED_TRIGGERS table (with the default table prefix) contains the jobs being executed.
The first column of this table is the name of the scheduler in charge of the job, so it is pretty easy to know who is doing what.
Then, if you enable JMX export on your Quartz instances (org.quartz.scheduler.jmx.export), the exposed MBeans give you a new entry point to remotely manage each scheduler individually. The MBean provides the method boolean interruptJob("JobName", "JobGroup").
Then you "just" need to call this method on the appropriate scheduler instance to effectively interrupt the job.
I tried the whole process manually and it works fine; it just needs to be automated :)
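To automate it, something along these lines could work; a minimal sketch over remote JMX, assuming the host, port, scheduler name, and instance id are known (Quartz typically registers its scheduler MBeans under the quartz:type=QuartzScheduler naming pattern):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJobInterrupter {
    public static boolean interrupt(String host, int port,
                                    String schedulerName, String instanceId,
                                    String jobName, String jobGroup) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName scheduler = new ObjectName(
                    "quartz:type=QuartzScheduler,name=" + schedulerName
                            + ",instance=" + instanceId);
            // Invoke the interruptJob operation exposed by the Quartz MBean.
            return (Boolean) mbs.invoke(scheduler, "interruptJob",
                    new Object[] { jobName, jobGroup },
                    new String[] { String.class.getName(), String.class.getName() });
        } finally {
            connector.close();
        }
    }
}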
HTH
You are right, Scheduler.interrupt() does not work in cluster mode. Let's say a job trigger is fired by a scheduler on one node but this API is called on another node.
To overcome this, you might use the message broker approach (e.g. JMS, RabbitMQ, etc.) with a publish/subscribe model. Instead of calling Scheduler.interrupt(), the client publishes an interruption message to the broker; the payload consists of the identity of the job detail (i.e. the JobKey) and the name of the scheduler (if there are multiple schedulers in a node). The message is then consumed by every node on which a Quartz instance is running; each node finds its Quartz scheduler by name and calls Scheduler.interrupt() on it with the job identity taken from the message payload.
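A minimal sketch of the consumer side, assuming plain JMS and a MapMessage carrying hypothetical schedulerName, jobName, and jobGroup fields:
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;
import org.quartz.JobKey;
import org.quartz.Scheduler;

public class InterruptMessageListener implements MessageListener {
    private final Scheduler scheduler; // the local Quartz scheduler on this node

    public InterruptMessageListener(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    public void onMessage(Message message) {
        try {
            MapMessage m = (MapMessage) message;
            // Every node consumes the message, but interrupt() only affects
            // jobs running in the local JVM, so only the node actually
            // executing the job does any real work here.
            if (scheduler.getSchedulerName().equals(m.getString("schedulerName"))) {
                scheduler.interrupt(new JobKey(m.getString("jobName"),
                                               m.getString("jobGroup")));
            }
        } catch (Exception e) {
            // Hypothetical error handling: log and drop the message.
        }
    }
}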

Several task executors in Spring Integration

I am currently working on a project involving lots of asynchronous tasks running independently. I have one Spring configuration file:
<task:executor id="taskScheduler" pool-size="5-20">
<task:executor id="specificTaskScheduler" pool-size="5-50" queue-capacity="100">
<!-- integration beans and
several object pools, with a total number of 100 beans created
using CommonsPoolTargetSource -->
I specifically created two executors: one to be used for Spring Integration needs, and a custom executor intended to run only my tasks, which I feed to the integration beans with an explicit reference. After that I submitted a long-running task for processing. My EAR runs on WebLogic; I dumped a stack trace of the running threads and was very disappointed to find out that most of the fifty threads in my custom executor were waiting in the executor's queue for an object to become available from the pool. I did not want CommonsPoolTargetSource to use my executor as a platform for managing its resources. What can I do here? Maybe creating a separate Spring file with the CommonsPoolTargetSource beans will solve it? Thank you for any ideas.
Thanks guys. It turned out that the pool was not the problem; I just had to add more instances to it and slightly increase the pool size, with the queue capacity set to zero and the rejection policy set to execute the rejected call in the caller's thread (roughly the configuration sketched below). I have yet to test it under heavy load though.
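For reference, a minimal sketch of that executor configuration, assuming Spring's ThreadPoolTaskExecutor (the pool sizes are illustrative):
import java.util.concurrent.ThreadPoolExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class CustomExecutorConfig {
    public static ThreadPoolTaskExecutor customExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(50);
        // Zero capacity means no queueing: the pool grows up to maxPoolSize
        // instead of parking tasks in a queue.
        executor.setQueueCapacity(0);
        // Rejected tasks run in the submitting thread rather than failing.
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        return executor;
    }
}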

Shutting down a TaskExecutor in a web application when the web server is shut down

I have a web application that makes use of a Spring TaskExecutor. The task executor is started when the web application is loaded for the first time, and the thread runs for the entire life of the web application. Ever since I added this thread to the application, Oracle Application Server will no longer shut down gracefully. My guess is that OAS is not shutting down because I am not gracefully shutting down the task executor thread anywhere in the code. Where would the best place be to shut down the task executor in a web application? I am using Java 5 with Spring 2.5.6.
Here is the task executor that I am using:
<bean id="taskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor"/>
SimpleAsyncTaskExecutor has no meaningful state, so it does not require a graceful shutdown. Also, I'm not sure what you mean by "the thread runs the entire life of the web application" - which thread are you referring to?
However, the threads that it spawns may be preventing your server from shutting down. SimpleAsyncTaskExecutor uses a ThreadFactory to create its threads, and by default threads that are created are not daemon threads, and so any spawned threads which are still executing when the shutdown is initiated will continue, and the server will only terminate when all of those spawned threads are finished.
As a solution, I suggest using a ThreadPoolTaskExecutor in preference to the SimpleAsyncTaskExecutor. It has a boolean waitForTasksToCompleteOnShutdown property which controls what happens when a shutdown occurs, and it can force termination of the thread pool if required.
If you still want to use SimpleAsyncTaskExecutor, then you could inject it with a custom ThreadFactory which creates daemon threads. This will allow the server to shut down regardless of the state of those spawned threads. This might be dangerous, though, if those threads could leave resources in an unstable state.
Spring provides a bean-friendly implementation of ThreadFactory (CustomizableThreadFactory) which you can inject into the SimpleAsyncTaskExecutor, setting the daemon property to true.
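A minimal sketch of that wiring, configured programmatically (the thread name prefix is illustrative):
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.scheduling.concurrent.CustomizableThreadFactory;

public class DaemonExecutorConfig {
    public static SimpleAsyncTaskExecutor daemonExecutor() {
        CustomizableThreadFactory threadFactory = new CustomizableThreadFactory("async-");
        threadFactory.setDaemon(true); // daemon threads won't block server shutdown
        SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor();
        executor.setThreadFactory(threadFactory);
        return executor;
    }
}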
Typically web applications can react to container shutdown via a ServletContextListener. Implement one and use it to shut down your executor. Ideally you should couple that with a proper shutdown of the whole Spring lifecycle. I vaguely remember that Spring had something ready-made for that purpose, so it's probably a good idea to check there as well.
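For example, a minimal sketch of such a listener, assuming the executor has been switched to a ThreadPoolTaskExecutor registered under the bean name "taskExecutor":
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class ExecutorShutdownListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        WebApplicationContext ctx = WebApplicationContextUtils
                .getRequiredWebApplicationContext(sce.getServletContext());
        // Stop the executor so its threads don't keep the server alive.
        ThreadPoolTaskExecutor executor =
                (ThreadPoolTaskExecutor) ctx.getBean("taskExecutor");
        executor.shutdown();
    }
}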
You can use the @PreDestroy annotation:
@PreDestroy
protected void shutDown() {
    // assumes taskExecutor is a ThreadPoolTaskExecutor;
    // SimpleAsyncTaskExecutor has no shutdown() method
    taskExecutor.shutdown();
}
