Clustered Quartz trigger is paused by Camel on shutdown - java

I have the following configuration in quartz.properties:
org.quartz.scheduler.instanceId=AUTO
org.quartz.scheduler.instanceName=JobCluster
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.dataSource=myDataSource
org.quartz.dataSource.myDataSource.jndiURL=jdbc/myDataSource
org.quartz.jobStore.isClustered=true
org.quartz.threadPool.threadCount=5
Spring configuration looks like this:
<bean id="quartz2" class="org.apache.camel.component.quartz2.QuartzComponent">
<property name="propertiesFile" value="quartz.properties"/>
</bean>
<route>
<from uri="quartz2://myTrigger?job.name=myJob&amp;job.durability=true&amp;stateful=true&amp;trigger.repeatInterval=60000&amp;trigger.repeatCount=-1"/>
<to uri="bean:myBean?method=retrieve"/>
....
On application shutdown the Quartz trigger state changes to PAUSED, and after the next start it never returns to WAITING, so it never fires again.
Is it possible to configure Quartz/Camel somehow to resume the trigger after an application restart?
Camel version is 2.12.0.
Spring version is 3.2.4.RELEASE.
Actually, such behavior contradicts their statement in the documentation:
If you use Quartz in clustered mode, e.g. the JobStore is clustered. Then the Quartz2 component will not pause/remove triggers when a node is being stopped/shutdown. This allows the trigger to keep running on the other nodes in the cluster.

If you want to dynamically suspend/resume routes, as
org.apache.camel.impl.ThrottlingRoutePolicy
does, then it is advised to use org.apache.camel.SuspendableService, as it allows for fine-grained suspend and resume operations. Use org.apache.camel.util.ServiceHelper to aid in invoking these operations, as it supports fallback for regular org.apache.camel.Service instances.
For more details, please refer to the RoutePolicy and Quartz Component documentation.
Hope this might help.
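If the goal is simply to bring the route back on a schedule, a scheduled route policy from camel-quartz2 may also be worth a look. A minimal Spring XML sketch follows; the bean id, cron expression, and endpoint URIs here are illustrative assumptions, not taken from the question:

```xml
<!-- resume the route at 06:00 every day; the cron expression is illustrative -->
<bean id="resumePolicy"
      class="org.apache.camel.routepolicy.quartz2.CronScheduledRoutePolicy">
    <property name="routeResumeTime" value="0 0 6 * * ?"/>
</bean>

<route routePolicyRef="resumePolicy">
    <from uri="quartz2://myTrigger?trigger.repeatInterval=60000"/>
    <to uri="bean:myBean?method=retrieve"/>
</route>
```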

Related

Quartz + Spring : Configure jobs to run on specific time using JobStore

I'm trying out Quartz scheduler and managed to get it to work with Spring using Maven.
What I need to do is configure Quartz to store the jobs so that it can execute them at their scheduled times. As far as I know there are two types of triggers in Quartz, Simple and Cron, and I also found out that there is something called a JobStore in Quartz. I got it configured to some extent.
Could someone please give me a good reference/references on how to set up Quartz with a JobStore? It would be a big help, thank you.
You can have a look at these links
Quartz JobStore with Spring Framework
http://trimplement.com/using-spring-and-quartz-with-jobstore-properties/
If you still can't figure it out, let me know.
Just to give you another option: have you tried Spring's task scheduling? Nowadays I change all my old Quartz jobs to it; it is easier to configure and you can use annotations.
http://spring.io/blog/2010/01/05/task-scheduling-simplifications-in-spring-3-0/
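As a rough sketch of that Spring 3 approach (assuming the task namespace is declared on the beans element; the bean id and pool size are illustrative):

```xml
<!-- pick up @Scheduled / @Async annotations on beans -->
<task:annotation-driven scheduler="myScheduler"/>
<!-- a thread-pool scheduler with 5 threads; the size is illustrative -->
<task:scheduler id="myScheduler" pool-size="5"/>
```

A bean method annotated with @Scheduled(fixedRate = 60000) would then run every minute, with no quartz.properties needed at all.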
You will usually create a Scheduler from a factory class. Quartz can be set up in several ways.
By using the org.quartz.impl.StdSchedulerFactory.getDefaultScheduler(). This will load the quartz.properties file in the Quartz distribution if you have not provided your own.
By specifying your configuration as Key-Value pairs in a quartz.properties file and loading it in org.quartz.impl.StdSchedulerFactory(java.lang.String fileName).getScheduler().
By specifying your configuration in a java.util.Properties as Key-Value pairs and loading it in org.quartz.impl.StdSchedulerFactory(java.util.Properties props).getScheduler().
By using the spring-context-support jar from the Spring Framework and using a higher level abstraction such as org.springframework.scheduling.quartz.SchedulerFactoryBean.
etc.
Quartz will start triggering jobs only once org.quartz.Scheduler#start() has been invoked. Until this method is called, the Scheduler remains in standby mode.
The Scheduler can be destroyed to release threads by calling org.quartz.Scheduler#shutdown().
Example of Bootstrapping Quartz with Spring
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzExample {
...
    @Bean
    public SchedulerFactoryBean schedulerFactory() {
        SchedulerFactoryBean factoryBean = new SchedulerFactoryBean();
        return factoryBean;
    }
}
The bean definition above is enough to get the following default configuration:
JobFactory - The default is Spring’s org.springframework.scheduling.quartz.AdaptableJobFactory, which supports java.lang.Runnable objects as well as standard Quartz org.quartz.Job instances.
ThreadPool - Default is a Quartz org.quartz.simpl.SimpleThreadPool with a pool size of 10. This is configured through the corresponding Quartz properties.
SchedulerFactory - The default used here is the org.quartz.impl.StdSchedulerFactory, reading in the standard quartz.properties from quartz.jar.
JobStore - The default used is org.quartz.simpl.RAMJobStore which does not support persistence and is not clustered.
Life-Cycle - The org.springframework.scheduling.quartz.SchedulerFactoryBean implements org.springframework.context.SmartLifecycle and org.springframework.beans.factory.DisposableBean which means the life-cycle of the scheduler is managed by the Spring container. The org.quartz.Scheduler#start() is called in the start() implementation of SmartLifecycle after initialization and the org.quartz.Scheduler#shutdown() is called in the destroy() implementation of DisposableBean at application teardown.
You can override the startup behaviour by calling setAutoStartup(false) on the org.springframework.scheduling.quartz.SchedulerFactoryBean; with this setting you have to start the scheduler manually.
All these default settings can be overridden by calling the various setter methods on org.springframework.scheduling.quartz.SchedulerFactoryBean.
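For instance, a sketch of overriding two of those defaults in Spring XML (the property values are illustrative; configLocation and autoStartup are standard SchedulerFactoryBean properties):

```xml
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <!-- read Quartz settings from your own file instead of the quartz.jar default -->
    <property name="configLocation" value="classpath:quartz.properties"/>
    <!-- do not start automatically; you must call start() yourself -->
    <property name="autoStartup" value="false"/>
</bean>
```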
I have provided a full working example on GitHub. If you are interested in an example that saves the jobs in a database, check out the HSQLDB branch of the same repository.

ActiveMQ Scheduler Failover with JDBC MasterSlave

I currently have a working two-broker JDBC MasterSlave configuration, and the next step for me is to implement a scheduler with failover. I've looked around and haven't seen any information about this, and was curious to see if this is possible or if I should try a different approach.
Currently, I have the two brokers using the same dataDirectory both within the broker tag and the JDBCPersistenceAdapter tag. However, within that data directory ActiveMQ creates two separate scheduler folders. I cannot seem to force it to use the same one, so failover with scheduling isn't working.
I've also tried the KahaDB approach with the same criteria, and that doesn't seem to work either.
Another option would be for the scheduler information to be pushed to the database (in this case, oracle) and be able to be picked up from there (not sure if possible).
Here is a basic overview of what I need:
Master and slave brokers up and running, using same dataDirectory (lets say, broker1 and broker2)
If I send a request to process messages through master at a certain time and master fails, slave should be able to pick up the scheduler information from master (this is where I'm stuck)
Slave should be processing these messages at the scheduled time
activemq.xml (relevant parts)
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="b1" useJmx="true"
persistent="true" schedulerSupport="true">
<!-- kahaDB persistenceAdapter -->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb" enableIndexWriteAsync="false"
ignoreMissingJournalfiles="true" checkForCorruptJournalFiles="true"
checksumJournalFiles="true"/>
</persistenceAdapter>
<!-- JDBC persistenceAdapter -->
<persistenceAdapter>
<jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#oracle-ds"/>
</persistenceAdapter>
Can someone possibly point me in the right direction? I'm fairly new to ActiveMQ. Thanks in advance!
If anyone is curious, adding the schedulerDirectory property to the broker tag seems to work fine. So my broker tag in activemq.xml now looks like this:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1"
dataDirectory="${activemq.data}" useJmx="true" persistent="true"
schedulerSupport="true" schedulerDirectory="${activemq.data}/broker1/scheduler"/>
You have probably figured out what you need to do to make this work, but for the sake of other folks like me who were looking for the answer: if you're trying to make failover work for scheduled messages with the default KahaDB store (as of v5.13.2) and a shared file system, you will need to do the following:
Have a folder on the shared file system defined as the dataDirectory attribute in the broker tag (/shared/folder in the example below).
Use the same brokerName for all nodes in that master/slave cluster (myBroker1 in the example below).
Example:
<broker xmlns="http://activemq.apache.org/schema/core"
brokerName="myBroker1"
dataDirectory="/shared/folder"
schedulerSupport="true"/>

JNDI issue with WebSphere 6 UserTransaction and Quartz Scheduler

I have my Web Application running on WebSphere 6.0, and there are also some Quartz Scheduler tasks. If I do the lookup like this in hibernate.cfg.xml:
<property name="jta.UserTransaction">java:comp/UserTransaction</property>
It works fine with my Web Application, but any threads initiated by Quartz Timers fail to access the DB using that lookup string. But if I use
<property name="jta.UserTransaction">jta/usertransaction</property>
Then it is the opposite: the Quartz timers work, but I can't do the lookup inside my Web Application.
Is there any way to make them both work with same hibernate configuration?
EDIT: here is my quartz.properties file. By the way, the Quartz version is 1.5.2.
org.quartz.scheduler.instanceName = TestScheduler
org.quartz.scheduler.instanceId = one
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 4
org.quartz.jobStore.misfireThreshold = 5000
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
I don't know if this is relevant to you, but I recently had a similar problem. My issue was remote vs. local access; altering my design a bit and adding the interface names to my @Local and @Remote annotations worked for me.
I think you are missing the transaction management settings in your quartz.properties.
Something like this:
org.quartz.scheduler.userTransactionURL=jta/usertransaction
org.quartz.scheduler.wrapJobExecutionInUserTransaction=true
The idea is to tell Quartz to wrap the job execution in a transaction and where to get it.

Spring Batch: different job launcher for different jobs

I have 2 different jobs (actually more, but for simplicity assume 2). Each job can run in parallel with the other job, but each instance of the same job should run sequentially (otherwise the instances will cannibalize each other's resources).
Basically I want each of these jobs to have its own queue of job instances. I figured I could do this using two different thread-pooled job launchers (each with 1 thread) and associating a job launcher with each job.
Is there a way to do this that will be respected when launching jobs from the Spring Batch Admin web UI?
There is a way to specify a specific job launcher for a specific job, but the only way I have found to do it is through use of a JobStep.
If you have a job called "specificJob", this will create another job, "queueSpecificJob", so that when you launch it, either through Quartz or the Spring Batch Admin web UI, it will queue up a "specificJob" execution.
<bean id="specificJobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository"/>
<property name="taskExecutor">
<task:executor id="singleThreadPoolExecutor" pool-size="1"/>
</property>
</bean>
<job id="queueSpecificJob">
<step id="specificJobStep">
<job ref="specificJob" job-launcher="specificJobLauncher" job-parameters-extractor="parametersExtractor" />
</step>
</job>
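The serialization behaviour that the single-thread launcher above relies on can be sketched with plain java.util.concurrent. This is a conceptual sketch only, not Spring Batch API; the class name and job names are made up:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// One single-thread executor per job name: instances of the same job
// serialize behind each other, while different jobs run in parallel.
public class PerJobQueues {
    private final Map<String, ExecutorService> launchers = new ConcurrentHashMap<>();

    public Future<?> launch(String jobName, Runnable jobInstance) {
        // computeIfAbsent lazily gives each job name its own one-thread queue
        return launchers
                .computeIfAbsent(jobName, n -> Executors.newSingleThreadExecutor())
                .submit(jobInstance);
    }

    public static void main(String[] args) throws Exception {
        PerJobQueues q = new PerJobQueues();
        StringBuffer log = new StringBuffer();
        // two instances of jobA queue behind each other on the same thread...
        Future<?> a1 = q.launch("jobA", () -> log.append("a1 "));
        Future<?> a2 = q.launch("jobA", () -> log.append("a2 "));
        a1.get();
        a2.get();
        // ...so a1 always completes before a2 starts
        System.out.println(log.toString().trim()); // prints "a1 a2"
    }
}
```

The XML above achieves the same effect declaratively: the pool-size="1" executor is the single-thread queue, and the JobStep is what routes launches through it.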
@ahbutfore
How are the jobs triggered? Do you use Quartz trigger by any chance?
If yes, would implementing/extending the org.quartz.StatefulJob interface in all your jobs do the work for you?
See the Spring beans configuration here: https://github.com/regunathb/Trooper/blob/master/examples/example-batch/src/main/resources/external/shellTaskletsJob/spring-batch-config.xml. Check the source code of org.trpr.platform.batch.impl.spring.job.BatchJob.
You can do more complex serialization (including across Spring batch nodes) using a suitable "Leader Election" implementation. I have used Netflix Curator (an Apache Zookeeper recipe) in my project. Some pointers here : https://github.com/regunathb/Trooper/wiki/Useful-Batch-Libraries
Using a shell script you can launch different jobs in parallel.
Add an '&' to the end of each command line. The shell will execute them in parallel with its own execution.

Interrupting a job in quartz cluster

I have a Quartz setup with multiple instances and I want to interrupt a job wherever it is executed. As stated in the documentation, the Scheduler.interrupt() method is not cluster-aware, so I'm looking for some common practice to overcome this limitation.
Well, here are some basics you can use to achieve that.
When running in cluster mode, information about the currently running jobs is available in the Quartz tables. For instance, the QRTZ_FIRED_TRIGGERS table (assuming the default table prefix) contains the jobs being executed.
The first column of this table is the name of the scheduler instance in charge of the job, so it is pretty easy to know who is doing what.
Then, if you enable the JMX export of your Quartz instances (org.quartz.scheduler.jmx.export), the exposed MBeans give you a new entry point to remotely manage each scheduler individually. The MBean provides the method boolean interruptJob("JobName", "JobGroup").
Then you "just" need to call this method on the appropriate scheduler instance to effectively interrupt the job.
I tried the whole process manually and it works fine; it just needs to be automated :)
Hope it helps.
You are right, Scheduler.interrupt() does not work in cluster mode. Say a job trigger is fired by a scheduler in one node, but this API is called in another node.
To overcome this, you might use a message broker approach (e.g. JMS, RabbitMQ, etc.) with a publish/subscribe model. Instead of calling Scheduler.interrupt(), the client sends an interruption message to the broker; the payload consists of the identity of the job detail (its JobKey) and the name of the scheduler (if multiple schedulers are used in a node). The message is then consumed by every node running a Quartz instance; each node looks up its Quartz scheduler by name and, if it matches, calls Scheduler.interrupt() on the found scheduler with the job identity taken from the message payload.
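That fan-out can be sketched broker-agnostically in plain Java. The in-JVM "bus" below stands in for JMS/RabbitMQ, and all names (InterruptBus, the scheduler names) are illustrative; a real node would call Scheduler.interrupt(JobKey) where the sketch records the hit:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

// Simplified publish/subscribe: every subscriber is one cluster node,
// and each published message reaches all of them.
public class InterruptBus {
    // payload = (schedulerName, jobKey)
    private final List<BiConsumer<String, String>> nodes = new CopyOnWriteArrayList<>();

    public void subscribe(BiConsumer<String, String> node) {
        nodes.add(node);
    }

    // broadcast to every node, mirroring a pub/sub topic
    public void publishInterrupt(String schedulerName, String jobKey) {
        nodes.forEach(n -> n.accept(schedulerName, jobKey));
    }

    public static void main(String[] args) {
        InterruptBus bus = new InterruptBus();
        Map<String, String> interrupted = new ConcurrentHashMap<>();

        // two nodes, each "owning" one hypothetical scheduler by name;
        // only the owner acts on a matching message
        for (String owned : new String[] {"sched-A", "sched-B"}) {
            bus.subscribe((schedName, jobKey) -> {
                if (owned.equals(schedName)) {
                    // a real node would call scheduler.interrupt(jobKey) here
                    interrupted.put(owned, jobKey);
                }
            });
        }

        bus.publishInterrupt("sched-B", "group.myJob");
        System.out.println(interrupted); // only sched-B's node acted
    }
}
```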
