I'm using Spring with Quartz, and everything is working fine, but some previously configured triggers also get executed because they are still stored in the Quartz tables.
We can delete all the unconfigured triggers manually and then run the application, but that is not good practice.
I want to remove all the stale triggers through a Spring + Quartz property or some other solution.
I have configured 3 triggers in the Spring configuration file like this:
<property name="triggers">
    <list>
        <ref bean="FirstTrigger" />
        <ref bean="secondTrigger" />
        <ref bean="ThirdTrigger" />
    </list>
</property>
When the server starts, all the triggers are stored in the Quartz tables along with the corresponding cron triggers and job details.
If I remove any of the triggers from my configuration (say, in the example above, I remove secondTrigger), it is not removed from the Quartz tables.
At that time, the removed trigger that still lives in the DB also gets executed.
In the Spring + Quartz integration, is there any property to handle this problem, or do we need to do something else?
Thanks in advance.
If you store triggers in the DB (assuming your triggers are cron-based), you can simply delete the records like this:
DELETE FROM QRTZ_CRON_TRIGGERS WHERE SCHED_NAME='scheduler' and TRIGGER_NAME='myTrigger' and TRIGGER_GROUP='DEFAULT';
DELETE FROM QRTZ_TRIGGERS WHERE SCHED_NAME='scheduler' and TRIGGER_NAME='myTrigger' and TRIGGER_GROUP='DEFAULT';
You may also consider looking around other Quartz DB tables to find leftovers related to your job.
You can access the Quartz Scheduler, Jobs, Triggers, etc. using the Quartz API.
Have a look at the Quartz Cookbook to see how to list all the defined triggers, among other things. You could then remove the unnecessary triggers through this API.
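For example, with the Quartz 2.x API, something like the following could work (a sketch; TriggerCleaner and the configuredNames parameter are illustrative names, not from the original post):
import java.util.List;
import java.util.Set;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.TriggerKey;
import org.quartz.impl.matchers.GroupMatcher;

public class TriggerCleaner {
    // Unschedule every trigger stored in the scheduler whose name is not
    // in the list of trigger names currently configured in Spring.
    public static void removeStaleTriggers(Scheduler scheduler, List<String> configuredNames)
            throws SchedulerException {
        Set<TriggerKey> keys = scheduler.getTriggerKeys(GroupMatcher.anyTriggerGroup());
        for (TriggerKey key : keys) {
            if (!configuredNames.contains(key.getName())) {
                scheduler.unscheduleJob(key); // removes the trigger from the Quartz tables
            }
        }
    }
}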
Related
I am writing a Spring Batch job to read a file from a shared drive and load the data into a shared DB. This batch will be deployed to and executed from 2 nodes (servers). I want to make sure the file is read, and the data loaded, by only one server.
I am not finding anything concrete on the internet. I have a couple of ideas to handle this, described below.
1. Use FileChannel.tryLock to get a lock on the file, and move the file after reading it.
2. Maintain a table in the shared DB with a record, say "fileReadJobExecution", whose status is initially NULL. When the batch application runs, it looks up the record with a NULL status and tries to update the status to IN_PROGRESS. Whichever node (server) gets updateCount > 0 is allowed to read the file from the shared location, and after a successful run that batch updates the status back to NULL (a sketch of this approach follows below).
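For reference, option 2 could look roughly like the following (a sketch only; the FileClaimDao class and the table/column names are illustrative assumptions):
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class FileClaimDao {
    private final JdbcTemplate jdbcTemplate;

    public FileClaimDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Only the node whose UPDATE wins sees updateCount > 0; the other node skips the file.
    public boolean tryClaim(String fileName) {
        int updated = jdbcTemplate.update(
                "UPDATE fileReadJobExecution SET status = 'IN_PROGRESS'"
                        + " WHERE file_name = ? AND status IS NULL", fileName);
        return updated > 0;
    }

    // Reset the status after a successful run so the file record can be claimed again.
    public void release(String fileName) {
        jdbcTemplate.update(
                "UPDATE fileReadJobExecution SET status = NULL WHERE file_name = ?", fileName);
    }
}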
I am looking for something already available in either Spring Batch or Java to handle multi-node synchronization against a shared server.
Please help with suggestions.
It sounds like you could either use remote chunking or partitioning to achieve your objective. From what you've described, I think partitioning would work best.
You could create a master Step to pull in your list of files, and then delegate the processing of those files to slave Step objects - either remotely or locally on different threads - passing the file name via the ExecutionContext.
The Spring Batch Samples GitHub project has some great examples, and I think you may find the partitionFileJob.xml particularly helpful.
In particular, review the following Bean definitions from the sample project:
<job id="partitionJob" xmlns="http://www.springframework.org/schema/batch">
<step id="step">
<partition step="step1" partitioner="partitioner">
<handler grid-size="2" task-executor="taskExecutor" />
</partition>
</step>
</job>
<bean id="partitioner" class="org.springframework.batch.core.partition.support.MultiResourcePartitioner">
<property name="resources" value="classpath:data/iosample/input/delimited*.csv" />
</bean>
<bean id="itemReader" scope="step" autowire-candidate="false" parent="itemReaderParent">
<property name="resource" value="#{stepExecutionContext[fileName]}" />
</bean>
In Spring, beans can be configured to be lazily initialized. Spring Batch jobs are also (Spring-managed) beans. That is, when I configure something like
<sb:job id="dummyJob" job-repository="jobRepository">
    <sb:step id="dummyStep">
        <sb:tasklet ref="dummyTasklet" />
    </sb:step>
</sb:job>
I actually configure a new (Job-typed) bean inside the Spring container.
My issue is that I really want my Job beans to be lazily initialized. As they are regular Spring-managed beans, I'd expect to be able to instruct the Spring context to make them lazy. This is because I have a large number of such beans, and in many cases a single execution of my Spring-based application runs only one job.
But there's no lazy-init attribute I can set on my <sb:job ...> configuration. Is there any way I can force lazy initialization? If I configure my <beans> root with default-lazy-init="true", will this also apply to the Job beans?
You have two options here:
1. Configure your job manually. This would allow you to use the regular lazy-init attributes Spring exposes (see the sketch below).
2. Use the JobScope now available in Spring Batch 3. Spring Batch 3 will be released soon, but the JobScope was already available in the last milestone.
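For option 1, a minimal sketch in Java config might look like this (an assumption on my part, not from the answer; the bean and step names mirror the question):
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.SimpleJob;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;

@Configuration
public class LazyJobConfiguration {

    // @Lazy defers creation of the Job bean until it is first requested.
    @Bean
    @Lazy
    public Job dummyJob(JobRepository jobRepository, Step dummyStep) {
        SimpleJob job = new SimpleJob("dummyJob");
        job.setJobRepository(jobRepository);
        job.addStep(dummyStep);
        return job;
    }
}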
Just to elaborate on Michael Minella's answer.
I had a similar requirement to lazily initialize the job repository.
I am working with Spring Batch 2.1.9.
The following is working for me.
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean"
lazy-init="true">
<property name="dataSource" ref="jobDataSource"/>
<property name="transactionManager" ref="jobTransactionManager"/>
</bean>
Note one pitfall I ran into: do not set the databaseType, i.e. avoid the following:
<property name="databaseType" value="SQLSERVER"/>
This is bad because it disables auto-discovery of the database type, and it broke my JUnit tests, which run on H2.
I have the following configuration in quartz.properties:
org.quartz.scheduler.instanceId=AUTO
org.quartz.scheduler.instanceName=JobCluster
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.dataSource=myDataSource
org.quartz.dataSource.myDataSource.jndiURL=jdbc/myDataSource
org.quartz.jobStore.isClustered=true
org.quartz.threadPool.threadCount=5
Spring configuration looks like this:
<bean id="quartz2" class="org.apache.camel.component.quartz2.QuartzComponent">
<property name="propertiesFile" value="quartz.properties"/>
</bean>
<route>
    <from uri="quartz2://myTrigger?job.name=myJob&amp;job.durability=true&amp;stateful=true&amp;trigger.repeatInterval=60000&amp;trigger.repeatCount=-1"/>
    <to uri="bean:myBean?method=retrieve"/>
    ....
On application shutdown the Quartz trigger state changed to PAUSED, and after the next start it never changed back to WAITING, so it never fired again.
Is it possible to configure Quartz/Camel somehow to resume the trigger after an application restart?
The Camel version is 2.12.0, the Spring version 3.2.4.RELEASE.
Actually, such behavior contradicts their statement in the guidelines:
If you use Quartz in clustered mode, e.g. the JobStore is clustered. Then the Quartz2 component will not pause/remove triggers when a node is being stopped/shutdown. This allows the trigger to keep running on the other nodes in the cluster.
If you want to dynamically suspend/resume routes, as org.apache.camel.impl.ThrottlingRoutePolicy does, it is advised to use org.apache.camel.SuspendableService, as it allows for fine-grained suspend and resume operations. Use org.apache.camel.util.ServiceHelper to aid in invoking these operations, as it supports fallback for regular org.apache.camel.Service instances.
For more details, please refer to the RoutePolicy and Quartz component documentation.
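As a practical workaround (my own suggestion, not something from the Camel documentation), you could resume all triggers once after the scheduler has started, so that triggers left PAUSED by the previous shutdown go back to WAITING; whether this is safe for your clustered setup is something to verify:
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class TriggerResumer {
    // Clears the PAUSED state on all trigger groups; call once at application startup.
    public static void resumeAllTriggers(Scheduler scheduler) throws SchedulerException {
        scheduler.resumeAll();
    }
}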
Hope this might help.
I have 2 different jobs (actually more, but for simplicity assume 2). Each job can run in parallel with the other job, but each instance of the same job should run sequentially (otherwise the instances will cannibalize each other's resources).
Basically, I want each of these jobs to have its own queue of job instances. I figured I could do this using two different thread-pooled job launchers (each with 1 thread) and associating a job launcher with each job.
Is there a way to do this that will be respected when launching jobs from the Spring Batch Admin web UI?
There is a way to specify a specific job launcher for a specific job, but the only way I have found to do it is through the use of a JobStep.
If you have a job called "specificJob", this will create another job, "queueSpecificJob", so when you launch it, either through Quartz or the Spring Batch web admin, it will queue up a "specificJob" execution.
<bean id="specificJobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository"/>
<property name="taskExecutor">
<task:executor id="singleThreadPoolExecutor" pool-size="1"/>
</property>
</bean>
<job id="queueSpecificJob">
<step id="specificJobStep">
<job ref="specificJob" job-launcher="specificJobLauncher" job-parameters-extractor="parametersExtractor" />
</step>
</job>
@ahbutfore
How are the jobs triggered? Do you use a Quartz trigger by any chance?
If yes, would implementing the org.quartz.StatefulJob interface in all your jobs do the work for you?
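A minimal sketch (assuming Quartz 1.x, where StatefulJob is a marker interface; in Quartz 2.x the @DisallowConcurrentExecution annotation replaces it):
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.StatefulJob;

// Implementing StatefulJob tells Quartz not to run two instances of this
// job concurrently, so executions of the same job are serialized.
public class SequentialBatchJob implements StatefulJob {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // launch the Spring Batch job here
    }
}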
See the Spring beans configuration here: https://github.com/regunathb/Trooper/blob/master/examples/example-batch/src/main/resources/external/shellTaskletsJob/spring-batch-config.xml. Check the source code of org.trpr.platform.batch.impl.spring.job.BatchJob.
You can do more complex serialization (including across Spring Batch nodes) using a suitable "leader election" implementation. I have used Netflix Curator (an Apache ZooKeeper recipe) in my project. Some pointers here: https://github.com/regunathb/Trooper/wiki/Useful-Batch-Libraries
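For illustration, leader election with Curator's LeaderLatch recipe might look like this (a sketch; the ZooKeeper connection string and latch path are placeholders):
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class BatchLeaderElection {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        LeaderLatch latch = new LeaderLatch(client, "/batch/leader");
        latch.start();
        latch.await(); // blocks until this node becomes the leader
        // ... run the job that must not execute on two nodes at once ...
        latch.close(); // give up leadership when done
    }
}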
Using a shell script you can launch different jobs in parallel.
Add an '&' to the end of each command line; the shell will then run the commands in the background, in parallel with its own execution.
After reading previous questions about this error, it seems like all of them conclude that you need to enable XA on all of the data sources. But:
1. What if I don't want a distributed transaction? What would I do if I want to start transactions on two different databases at the same time, but commit the transaction on one database and roll back the transaction on the other?
2. I'm wondering how my code actually initiated a distributed transaction. It looks to me like I'm starting completely separate transactions on each of the databases.
Info about the application:
The application is an EJB running on a Sun Java Application Server 9.1
I use something like the following Spring context to set up the Hibernate session factories:
<bean id="dbADatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="jdbc/dbA"/>
</bean>

<bean id="dbASessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dbADatasource" />
    <property name="hibernateProperties">
        <value>
            hibernate.dialect=org.hibernate.dialect.Oracle9Dialect
            hibernate.default_schema=schemaA
        </value>
    </property>
    <property name="mappingResources">
        [mapping resources...]
    </property>
</bean>

<bean id="dbBDatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="jdbc/dbB"/>
</bean>

<bean id="dbBSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dbBDatasource" />
    <property name="hibernateProperties">
        <value>
            hibernate.dialect=org.hibernate.dialect.Oracle9Dialect
            hibernate.default_schema=schemaB
        </value>
    </property>
    <property name="mappingResources">
        [mapping resources...]
    </property>
</bean>
Both of the JNDI resources are javax.sql.ConnectionPoolDataSources. They actually both point to the same connection pool, but we have two different JNDI resources because there's the possibility that the two completely separate groups of tables will move to different databases in the future.
Then in code, I do:
sessionA = dbASessionFactory.openSession();
sessionB = dbBSessionFactory.openSession();
sessionA.beginTransaction();
sessionB.beginTransaction();
The sessionB.beginTransaction() line produces the error in the title of this post - sometimes. I ran the app on two different Sun application servers. One runs it fine; the other throws the error. I don't see any difference in how the two servers are configured, although they do connect to different, but equivalent, databases.
So the questions are:
1. Why doesn't the above code start completely independent transactions?
2. How can I force it to start independent transactions rather than a distributed transaction?
3. What configuration could cause the difference in behavior between the two application servers?
Thanks.
P.S. the stack trace is:
Local transaction already has 1 non-XA Resource: cannot add more resources.
at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.enlistResource(J2EETransactionManagerOpt.java:124)
at com.sun.enterprise.resource.ResourceManagerImpl.registerResource(ResourceManagerImpl.java:144)
at com.sun.enterprise.resource.ResourceManagerImpl.enlistResource(ResourceManagerImpl.java:102)
at com.sun.enterprise.resource.PoolManagerImpl.getResource(PoolManagerImpl.java:216)
at com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:327)
at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:189)
at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:165)
at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:158)
at com.sun.gjc.spi.base.DataSource.getConnection(DataSource.java:108)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:82)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167)
at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142)
at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85)
at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1354)
at [application code ...]
1. Why doesn't the above code start completely independent transactions?
The app server manages the transaction for you and, if necessary, makes it a distributed transaction. It enlists all the participants automatically. When there is only one participant, you don't notice any difference from a plain JDBC transaction, but as soon as there is more than one, a distributed transaction is really needed, hence the error.
2. How can I force it to start independent transactions rather than a distributed transaction?
You can configure the datasource to be XA or local. The transactional behavior of Spring/Hibernate can also be configured to use either regular JDBC transactions or to delegate transaction management to the JTA distributed transaction manager.
I suggest you switch the datasource to non-XA and try to configure Spring/Hibernate to use JDBC transactions. You should find the relevant information in the documentation; here is what I suspect is the line to change:
<bean id="txManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager" />
This essentially means that you are not using the app server's distributed transaction manager.
3. What configuration could cause the difference in behavior between the two application servers?
If you really have exactly the same app and configuration, this means that in one case only one participant is enlisted in the distributed transaction, while there are two in the second case. One participant usually corresponds to one physical connection to a database. Could it be that in one case you use two schemas on two different databases, while in the second case you use two schemas on the same physical database? A more probable explanation would be that the datasources are configured differently on the two app servers.
PS: If you use JTA distributed transactions, you should use UserTransaction.{begin,commit,rollback} rather than their equivalents on the Session.
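For illustration, driving the transaction through JTA explicitly might look like this (a sketch; the JNDI name is the standard Java EE one, and the session handling is elided):
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class JtaTransactionExample {
    public void doWork() throws Exception {
        UserTransaction utx = (UserTransaction) new InitialContext()
                .lookup("java:comp/UserTransaction");
        utx.begin();
        try {
            // ... work with sessionA and sessionB; both connections are
            // enlisted in the same JTA transaction ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}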
After reading previous questions about this error, it seems like all of them conclude that you need to enable XA on all of the data sources.
No, not all of them - all except one (as the exception says), provided your application server supports the Logging Last Resource (LLR) optimization, which allows enlisting one non-XA resource in a global transaction.
Why doesn't the above code start completely independent transactions?
Because you aren't. When using beginTransaction() behind EJB session beans, Hibernate joins the JTA transaction (refer to the documentation for full details). So the first call just works, but the second call means enlisting another transactional resource in the current transaction, and since none of your resources are XA, you get the exception.
How can I force it to start independent transactions rather than a distributed transaction?
See @ewernli's answer.
What configuration could cause the difference in behavior between the two application servers?
No idea. Maybe one of them is using at least one XA datasource.