Quartz cron Job not starting - java

I'm using the Quartz scheduler to schedule a Spring Batch job.
The application starts without any exception, but it never fires the job.
Let me explain my scenario:
If I run the job (with the scheduler) from a main method using MapJobRepositoryFactoryBean, it works perfectly. After integrating the scheduler with the Spring MVC web app it showed a version update error, so I switched to JobRepositoryFactoryBean, which uses a database for storing job state.
I added the JobRepositoryFactoryBean bean and the other DB changes, but now it never triggers the job.
Below is a snippet of the log:
2015-02-10 19:14:45 INFO context.support.XmlWebApplicationContext - Bean 'jobRegistry' of type [class org.springframework.batch.core.configuration.support.MapJobRegistry] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2015-02-10 19:14:45 INFO jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver
2015-02-10 19:14:45 INFO launch.support.SimpleJobLauncher - No TaskExecutor has been set, defaulting to synchronous executor.
2015-02-10 19:14:46 INFO context.support.DefaultLifecycleProcessor - Starting beans in phase 2147483647
2015-02-10 19:14:46 INFO scheduling.quartz.SchedulerFactoryBean - Starting Quartz Scheduler now
2015-02-10 19:14:46 INFO web.servlet.DispatcherServlet - FrameworkServlet 'mvc-dispatcher': initialization completed in 2155 ms
Here is my job configuration:
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<bean
class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean"
p:dataSource-ref="dataSource" p:transactionManager-ref="transactionManager">
<property name="databaseType" value="reconConfig!{batch.databaseType}" />
<property name="isolationLevelForCreate" value="ISOLATION_DEFAULT" />
</bean>
<bean id="mapJobRepository"
class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"
lazy-init="true" autowire-candidate="false" />
<bean id="jobOperator"
class="org.springframework.batch.core.launch.support.SimpleJobOperator"
p:jobLauncher-ref="jobLauncher" p:jobExplorer-ref="jobExplorer"
p:jobRepository-ref="jobRepository" p:jobRegistry-ref="jobRegistry" />
<bean id="jobExplorer"
class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean"
p:dataSource-ref="dataSource" />
<bean id="jobRegistry"
class="org.springframework.batch.core.configuration.support.MapJobRegistry" />
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="appDataSource" />
</bean>
<bean class="org.springframework.batch.core.scope.StepScope" />
<bean id="reconConfigPlaceholderProperties"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="ignoreUnresolvablePlaceholders" value="true" />
<property name="location" value="classpath:batchDb.properties" />
<property name="placeholderPrefix" value="reconConfig!{" />
<property name="placeholderSuffix" value="}" />
</bean>
</beans>
It was running successfully, but after some more development it stopped working. I'm unable to figure out what exactly I changed in the configuration to cause this.
Can anyone please suggest the checkpoints for using JobRepositoryFactoryBean, in case I'm missing something, or is the problem elsewhere?

If this is your entire configuration for job scheduling, I believe you are missing the Cron scheduling part entirely...
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<bean id="cronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="jobDetail" />
<property name="cronExpression" value="*/10 * * * * ?" />
</bean>
</property>
</bean>
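For reference, the cronTrigger above also needs the jobDetail bean it points at. A minimal sketch of what that could look like for launching a Spring Batch job — BatchJobLauncherDetails is a hypothetical QuartzJobBean subclass that reads jobName from the job data map, resolves the job through the jobLocator and runs it via the jobLauncher:
<bean id="jobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass" value="com.example.scheduling.BatchJobLauncherDetails" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="myBatchJob" />
<entry key="jobLocator" value-ref="jobRegistry" />
<entry key="jobLauncher" value-ref="jobLauncher" />
</map>
</property>
</bean>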
Please read through the Spring doc and the Quartz scheduling section here.

We have had a similar, or the same, problem. Look into the DB repository. The repository is not protected against different instances of the application server (e.g. a testing and a development environment) connecting to it. When two or more applications are connected to the same DB they can conflict: they start competing for triggers and jobs, and jobs unregistered in one application are marked as ERROR and blocked, and vice versa.
Two tables are important in this case.
Select from XXX_SCHEDULER_STATE. Is there more than one row? Then there can be a conflict. (Can you not identify your app server among the rows? If so, you are connected to a different DB than you suppose. That is a common but trivial problem.)
Select XXX_TRIGGERS.TRIGGER_STATE. Is there an ERROR? If yes, try to change it from any SQL tool:
update XXX_TRIGGERS set TRIGGER_STATE = 'WAITING' where TRIGGER_STATE = 'ERROR';
Restart the application server. If you are lucky, the failed trigger will start and work after the restart. If not, try to shut down the concurrent app server or change the repository.
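One more hedged hint for the multi-environment case: in Quartz 2.x the scheduler name is part of every table's key, so giving each environment a distinct instance name (and optionally its own table prefix) keeps a testing and a development server from treading on each other's triggers. A sketch via Spring's SchedulerFactoryBean, assuming a JDBC job store is already configured; the values are illustrative:
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="quartzProperties">
<props>
<prop key="org.quartz.scheduler.instanceName">devScheduler</prop>
<prop key="org.quartz.jobStore.tablePrefix">DEV_</prop>
</props>
</property>
</bean>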

Related

Spring Integration: retry configuration with multiple instances

I'm running 4 instances of a Spring Boot / Spring Integration based app on 4 different servers.
The process is:
Read XML files one by one from a shared folder.
Process the file (check structure, content...), transform the data and send an email.
Write a report about the file in another shared folder.
Delete the successfully processed file.
I'm looking for a non-blocking and safe solution to process these files.
Use cases:
If an instance crashes while reading or processing a file (i.e. without finishing the integration chain): another instance must process the file, or the same instance must process it after restarting.
If an instance is processing a file, the other instances must not process it.
I have built this Spring Integration XML configuration file (it includes a JDBC metadata store backed by a shared H2 database):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-file="http://www.springframework.org/schema/integration/file"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration
http://www.springframework.org/schema/integration/spring-integration.xsd
http://www.springframework.org/schema/integration/file
http://www.springframework.org/schema/integration/file/spring-integration-file.xsd">
<int:poller default="true" fixed-rate="1000"/>
<int:channel id="inputFilesChannel">
<int:queue/>
</int:channel>
<!-- Input -->
<int-file:inbound-channel-adapter
id="inputFilesAdapter"
channel="inputFilesChannel"
directory="file:${input.files.path}"
ignore-hidden="true"
comparator="lastModifiedFileComparator"
filter="compositeFilter">
<int:poller fixed-rate="10000" max-messages-per-poll="1" task-executor="taskExecutor"/>
</int-file:inbound-channel-adapter>
<task:executor id="taskExecutor" pool-size="1"/>
<!-- Metadatastore -->
<bean id="jdbcDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="url" value="jdbc:h2:file:${database.path}/shared;AUTO_SERVER=TRUE;AUTO_RECONNECT=TRUE;MVCC=TRUE"/>
<property name="driverClassName" value="org.h2.Driver"/>
<property name="username" value="${database.username}"/>
<property name="password" value="${database.password}"/>
<property name="maxIdle" value="4"/>
</bean>
<bean id="jdbcMetadataStore" class="org.springframework.integration.jdbc.metadata.JdbcMetadataStore">
<constructor-arg ref="jdbcDataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="jdbcDataSource"/>
</bean>
<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
<constructor-arg index="0" ref="jdbcMetadataStore"/>
<constructor-arg index="1" value="files"/>
</bean>
</list>
</constructor-arg>
</bean>
<!-- Workflow -->
<int:chain input-channel="inputFilesChannel" output-channel="outputFilesChannel">
<int:service-activator ref="fileActivator" method="fileRead"/>
<int:service-activator ref="fileActivator" method="fileProcess"/>
<int:service-activator ref="fileActivator" method="fileAudit"/>
</int:chain>
<bean id="lastModifiedFileComparator" class="org.apache.commons.io.comparator.LastModifiedFileComparator"/>
<int-file:outbound-channel-adapter
id="outputFilesChannel"
directory="file:${output.files.path}"
filename-generator-expression ="payload.name">
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpressionString" value="headers[file_originalFile].delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:outbound-channel-adapter>
</beans>
Problem :
With multiple files, when 1 file is successfully processed, the transaction also commits the other existing files to the metadata store (table INT_METADATA_STORE). So if the app is restarted, those other files will never be processed
(it works fine if the app crashes while the first file is being processed).
It seems this only applies to reading files, not to processing them in the integration chain... How can I manage transaction rollback on a JVM crash, file by file?
Any help is very appreciated. It's driving me crazy :(
Thanks!
Edits / Notes:
Inspired by https://github.com/caoimhindenais/spring-integration-files/blob/master/src/main/resources/context.xml
I have updated my configuration with the answer from Artem Bilan and removed the transactional block from the poller block: I had transaction conflicts between instances (ugly table lock exceptions), although the behaviour stayed the same.
I have unsuccessfully tested this configuration in the poller block (same behaviour):
<int:advice-chain>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="file*" timeout="30000" propagation="REQUIRED"/>
</tx:attributes>
</tx:advice>
</int:advice-chain>
Maybe a solution based on the Idempotent Receiver enterprise integration pattern could work, but I didn't manage to configure it... I can't find precise documentation.
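For reference, the shape I was aiming for is roughly this (an untested sketch; filesChain is a hypothetical id that would have to be added to the processing chain below):
<int:idempotent-receiver endpoint="filesChain"
metadata-store="jdbcMetadataStore"
key-expression="headers[file_name]"
discard-channel="nullChannel"/>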
You shouldn't use a PseudoTransactionManager here, but a DataSourceTransactionManager instead.
Since you use a JdbcMetadataStore, it is going to participate in the transaction and if downstream flow fails, the entry in the metadata store is going to be rolled back as well.
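Wired into the question's inbound adapter, that could look roughly like this (a sketch reusing the transactionManager bean already defined in the question):
<int-file:inbound-channel-adapter id="inputFilesAdapter" channel="inputFilesChannel" directory="file:${input.files.path}" filter="compositeFilter">
<int:poller fixed-rate="10000" max-messages-per-poll="1">
<int:transactional transaction-manager="transactionManager"/>
</int:poller>
</int-file:inbound-channel-adapter>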
OK, I found a working solution. Maybe not the cleanest one, but it works:
Multiple instances on separate servers share the same H2 database (over a network folder mount; I think it should also work via remote TCP). MVCC has been activated on H2 (check its doc).
The inbound-channel-adapter has the scan-each-poll option activated to allow re-polling files that were previously ignored (because another instance had already begun processing them). So if another instance crashes, the file can be polled and processed again by this instance without a restart.
The defaultAutoCommit option is set to false on the DB connection pool.
I didn't use the FileSystemPersistentAcceptOnceFileListFilter because it was aggregating all read files in the metadata store as soon as one file was successfully processed. I didn't manage to use it in my context...
Instead, I wrote my own conditions and actions as expressions, using a filter and transaction synchronization.
<!-- Input -->
<bean id="lastModifiedFileComparator" class="org.apache.commons.io.comparator.LastModifiedFileComparator"/>
<int-file:inbound-channel-adapter
id="inputAdapter"
channel="inputChannel"
directory="file:${input.files.path}"
comparator="lastModifiedFileComparator"
scan-each-poll="true">
<int:poller max-messages-per-poll="1" fixed-rate="5000">
<int:transactional transaction-manager="transactionManager" isolation="READ_COMMITTED" propagation="REQUIRED" timeout="60000" synchronization-factory="syncFactory"/>
</int:poller>
</int-file:inbound-channel-adapter>
<!-- Continue only if the metadata store doesn't already contain the file; putIfAbsent atomically inserts it and returns null when it was absent -->
<int:filter input-channel="inputChannel" output-channel="processChannel" discard-channel="nullChannel" throw-exception-on-rejection="false" expression="#jdbcMetadataStore.putIfAbsent(headers[file_name], headers[timestamp]) == null"/>
<!-- Rollback by removing the file from the metadatastore -->
<int:transaction-synchronization-factory id="syncFactory">
<int:after-rollback expression="#jdbcMetadataStore.remove(headers[file_name])" />
</int:transaction-synchronization-factory>
<!-- Metadatastore configuration -->
<bean id="jdbcDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="url" value="jdbc:h2:file:${database.path}/shared;AUTO_SERVER=TRUE;AUTO_RECONNECT=TRUE;MVCC=TRUE"/>
<property name="driverClassName" value="org.h2.Driver"/>
<property name="username" value="${database.username}"/>
<property name="password" value="${database.password}"/>
<property name="maxIdle" value="4"/>
<property name="defaultAutoCommit" value="false"/>
</bean>
<bean id="jdbcMetadataStore" class="org.springframework.integration.jdbc.metadata.JdbcMetadataStore">
<constructor-arg ref="jdbcDataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="jdbcDataSource"/>
</bean>
<!-- Workflow -->
<int:chain input-channel="processChannel" output-channel="outputChannel">
<int:service-activator ref="fileActivator" method="fileRead"/>
<int:service-activator ref="fileActivator" method="fileProcess"/>
<int:service-activator ref="fileActivator" method="fileAudit"/>
</int:chain>
<!-- Output -->
<int-file:outbound-channel-adapter
id="outputChannel"
directory="file:${output.files.path}"
filename-generator-expression ="payload.name">
<!-- Delete the source file -->
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpressionString" value="headers[file_originalFile].delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:outbound-channel-adapter>
Any improvement or other solution is welcome.

Starting ActiveMQ from a Tomcat application, but using a shared datastore among instances

I have a successfully running ActiveMQ 5.9.1, Camel 2.11 and Tomcat 7.0.50 service-layer application, with the constraint that ActiveMQ has to be started independently.
The reason I'm using ActiveMQ is to have a shared datastore between 2 load-balanced instances for faster processing.
Here is what I want to do:
Be able to start ActiveMQ from pom.xml or, worst case, from context.xml. So, let's say 2 instances are load balanced and each starts its own ActiveMQ server, but they point to a single data store (directory) for queue information.
Please advise how I can have such a design and sustain optimum performance in a production environment.
I'm still on the hunt for any pseudocode that I can try; I have not succeeded yet.
Code snippet from camelContext.xml:
<broker id="broker" brokerName="myBroker" useShutdownHook="false" useJmx="true" persistent="true" dataDirectory="activemq-data"
xmlns="http://activemq.apache.org/schema/core">
<transportConnectors>
<transportConnector name="tcp" uri="tcp://localhost:61616"/>
</transportConnectors>
</broker>
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://myBroker?create=false&waitForStart=5000" />
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
init-method="start" destroy-method="stop">
<property name="maxConnections" value="8" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="activeMQConfig"
class="org.apache.activemq.camel.component.ActiveMQConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory" />
<property name="concurrentConsumers" value="20" />
</bean>
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="activeMQConfig" />
<property name="transacted" value="true" />
<property name="cacheLevelName" value="CACHE_CONSUMER" />
</bean>
Please help.
I finally resolved the issue. In case somebody else faces the same problem: I downgraded ActiveMQ to version 5.8.0 to resolve it.
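A note on the shared datastore part, for anyone with the same design: on a shared file system the usual pattern is shared-storage master/slave, where both brokers point their persistence adapter at the same directory and whichever one obtains the file lock becomes master while the other waits. A sketch with an illustrative path:
<broker brokerName="myBroker" persistent="true" useJmx="true" xmlns="http://activemq.apache.org/schema/core">
<persistenceAdapter>
<kahaDB directory="/mnt/shared/activemq-data"/>
</persistenceAdapter>
<transportConnectors>
<transportConnector name="tcp" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>
</broker>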

Quartz schedulers in two different clusters for the same application are firing at the same time

I am using the Quartz scheduler to push automatic emails daily at a specific time. My application is deployed in two clusters, and the schedulers in both clusters fire at the same time, sending duplicate emails to users. Please suggest how to make sure that only one scheduler fires.
I have done some googling and found that the JDBC-JobStore would resolve the issue, but I don't want to store schedule information in the DB. Will the RAMJobStore resolve the issue? Below is my existing code.
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<list>
<ref bean="cronTrigger" />
</list>
</property>
</bean>
<bean id="cronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="jobDetails" />
<property name="cronExpression" value="0 51 10 * * ?"/>
</bean>
<bean id="jobDetails"
class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
<property name="targetObject" ref="sendEmails" />
<property name="targetMethod" value="executeJob" />
<property name="concurrent" value="false" />
</bean>
<bean id="sendEmails" class="com.westin.agi.PushNotification"></bean>
Usually each server's RAM is separate, so the behaviour that both cluster members fire their schedulers at the same time is expected; a RAMJobStore cannot coordinate schedulers across servers.
If you do not want to use a database for synchronization, you can use an in-memory data grid solution like Hazelcast.
Actually there is a project to achieve exactly your use case with Hazelcast and Quartz:
https://github.com/mufumbo/quartz-hazelcast
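For completeness, if you ever do accept the database route, clustering is mostly a matter of pointing the SchedulerFactoryBean at a shared DataSource and enabling the clustered JDBC job store; Quartz then guarantees each trigger fires on only one node. A sketch, assuming a shared dataSource bean exists — note that MethodInvokingJobDetailFactoryBean jobs cannot be persisted to a database, so the job detail would need to be rewritten as a regular QuartzJobBean first:
<bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="triggers">
<list>
<ref bean="cronTrigger" />
</list>
</property>
<property name="quartzProperties">
<props>
<prop key="org.quartz.jobStore.isClustered">true</prop>
<prop key="org.quartz.scheduler.instanceId">AUTO</prop>
</props>
</property>
</bean>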

Configuration while using Quartz with Spring

I am currently using the Quartz scheduler that comes with the Spring framework. Our requirement is to schedule a method on a daily basis which will call a web service (only one method on the web service). My configuration is as below.
<bean id="downloadJob"
class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
<property name="targetObject" ref="adapter" />
<property name="targetMethod" value="getData" />
</bean>
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="jobDetails">
<list>
<ref bean="downloadJob" />
</list>
</property>
<property name="triggers">
<list>
<ref bean="cronTrigger" />
</list>
</property>
</bean>
<bean id="cronTrigger"
class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="downloadJob" />
<property name="cronExpression" value="" />
</bean>
I am reading the cronExpression value from a properties file.
Please give me some pointers on implementing the scheduler in a better way. I have seen other projects that use Quartz without Spring; they take care of the thread pool and some other properties, shown below. This is my first time working on a scheduler implementation. Please give me some suggestions/pointers on how to take care of the properties below while using Quartz with Spring (org.springframework.scheduling.quartz.SchedulerFactoryBean), and whether I need to take care of anything else apart from these.
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 15
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
That's a fine way to implement a scheduler in Spring. The Spring reference has a whole section on Quartz integration that should help get you started. For setting Quartz properties, use the SchedulerFactoryBean's quartzProperties property. You'll have to decide yourself if there's anything else to take care of by reading up on Quartz in general and learning more about Quartz configuration.
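For example, the thread pool settings from the question could be passed through like this (a sketch showing only the quartzProperties part, to be merged into the existing SchedulerFactoryBean definition):
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="quartzProperties">
<props>
<prop key="org.quartz.threadPool.threadCount">15</prop>
<prop key="org.quartz.threadPool.threadPriority">5</prop>
</props>
</property>
<!-- plus the jobDetails and triggers properties shown in the question -->
</bean>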

Using a DAO on a Bean used by a Spring Scheduled Task

I'm developing a web application using Struts2 + Spring, and now I'm trying to add a scheduled task. I'm using Spring's task scheduling to do so. In my applicationContext I have:
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
...
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="MYSQL" />
</bean>
</property>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
And then I have my DAO that uses this entityManagerFactory:
<bean id="dao" class="data.GenericDAO" />
So this works flawlessly within the web application. But now I have a problem when creating the scheduled task:
<task:scheduled-tasks scheduler="notifier">
<task:scheduled ref="emailService" method="sendMail" fixed-rate="30000" />
</task:scheduled-tasks>
<task:scheduler id="notifier" pool-size="10" />
<bean id="emailService" class="services.emailService" >
<property name="dao" ref="dao" />
</bean>
This executes the sendMail method on my emailService class every 30 seconds, and my emailService has the DAO injected correctly. The thing is that I can fetch objects with my DAO using the findById named queries, but when I try to access any property mapped by Hibernate, such as related collections or entities, I get a "LazyInitializationException: could not initialize proxy - no Session". I don't know what's wrong, since I believe the scheduled task is managed by Spring, so it should have no problem using a Spring-managed DAO. I should say that I'm using the openSessionInView filter on my Struts actions, so maybe I need something similar for this scheduled task.
Any help or suggestion will be appreciated, thanks!
Edit: Finally I found a way to fix this. I replaced my regular DAO with one where I can decide when to start and commit the transaction. So before doing anything I start a transaction, and then everything works OK. I still don't know exactly what causes the problem, or whether someday I'll be able to use my regular DAO, but for the moment I'm staying with this solution.
OpenSessionInView won't help you, because you don't have a web context. You need Spring's Declarative Transaction Management.
In most cases, what you need to do is just this XML:
<!-- JPA, not hibernate -->
<bean id="myTxManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
<tx:annotation-driven transaction-manager="myTxManager" />
<!-- without backing interfaces you probably also need this: -->
<aop:config proxy-target-class="true"/>
(Annotate your EmailService class with @Transactional to enable this.)
