I'm using WebLogic 10.3.3 with Oracle 11g and face a weird problem with Spring Batch as soon as I switch from Spring's ResourcelessTransactionManager (which is mainly for testing) to the production DataSourceTransactionManager. First I used WebLogic's default driver, oracle.jdbc.xa.client.OracleXADataSource, but that one fails because Spring can't set the isolation level - this is also documented here.
I'm fine with that since I don't need global transactions anyway, so I switched to oracle.jdbc.driver.OracleDriver. Now I'm getting the error message
ORA-01453: SET TRANSACTION must be first statement of transaction
I can't find a lot of information on this; there was a bug, but that should have been fixed in Oracle 7 a long time ago. It looks like a transaction is started before (?) the actual job gets added to the JobRepository and is not closed properly, or something like that.
I was able to solve this by setting the isolation level for all transactions to READ_COMMITTED. By default, Spring sets it to SERIALIZABLE, which is very strict (but perfectly fine); that didn't work on my machine, although Oracle should support it:
http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html
Here's my code - first for the configuration:
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
<property name="dataSource" ref="dataSource" />
<property name="isolationLevelForCreate" value="ISOLATION_READ_COMMITTED" />
</bean>
...and this is for the job itself (simplified):
public class MyFancyBatchJob {

    // jobRegistry and jobLauncher are injected by Spring (wiring omitted here)
    private JobRegistry jobRegistry;
    private JobLauncher jobLauncher;

    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void addJob() throws Exception {
        JobParameters params = new JobParametersBuilder().toJobParameters();
        Job job = jobRegistry.getJob("myFancyJob");
        JobExecution execution = jobLauncher.run(job, params);
    }
}
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" >
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"></property>
<property name="url" value="jdbc:oracle:thin:<username>/<password>#<host>:1521:<sid>" />
</bean>
<jdbc:initialize-database data-source="dataSource">
<jdbc:script location="org/springframework/batch/core/schema-drop-oracle10g.sql" />
<jdbc:script location="org/springframework/batch/core/schema-oracle10g.sql" />
</jdbc:initialize-database>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseType" value="oracle" />
<property name="tablePrefix" value="BATCH_"/>
<property name="isolationLevelForCreate" value="ISOLATION_DEFAULT"/>
</bean>
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
This configuration works for Spring Batch with Oracle 10g and 11g.
I am having some trouble configuring Spring to use BATCH_* tables hosted by MySQL.
I created the tables fine according to the docs; however, it looks like the code is trying to get a sequence number using the Oracle-flavoured function.
The error I get is:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown table 'BATCH_JOB_SEQ' in field list
But this is hiding the real problem. I debugged it, and it's trying to run this code:
select " + getIncrementerName() + ".nextval from dual";
This is obviously Oracle dialect. I notice that the correct incrementer does exist in my environment:
org.springframework.jdbc.support.incrementer.MySQLMaxValueIncrementer()
but it's calling
org.springframework.jdbc.support.incrementer.OracleMaxValueIncrementer()
I have set up my data source thus:
<bean id="springDataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://10.252.205.5:3306/MASKNG" />
<property name="username" value="MASKNG" />
<property name="password" value="maskng" />
</bean>
Anyone have any ideas? This is a showstopper for us at the moment.
Well, well, I really should RTM a little more... you just have to tell the jobRepository bean what type of DB you are using:
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="dataSource" ref="springDataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="validateTransactionState" value="${jobRepository.validationTransactionState:true}" />
<property name="isolationLevelForCreate" value="${jobRepository.isolationLevelForCreate}" />
<!-- <property name="databaseType" value="oracle" /> -->
<property name="databaseType" value="mysql" />
<property name="tablePrefix" value="BATCH_" />
<property name="lobHandler" ref="lobHandler"/>
</bean>
I currently have a Spring Integration-JDBC implementation up and running that polls a DB table for records and then sends valid records on to be processed by Spring Batch. I'm in the process of adding an additional table monitor to the project, along with an additional batch job, but I'm uncertain which nuts and bolts of Batch need to be unique to the new task, and what can/should be reused.
Spring Batch Job Setup:
<bean id="jobOperator" class="org.springframework.batch.core.launch.support.SimpleJobOperator">
<property name="jobExplorer">
<bean class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean">
<property name="dataSource" ref="dataSource" />
</bean>
</property>
<property name="jobRepository" ref="jobRepository" />
<property name="jobRegistry" ref="jobRegistry" />
<property name="jobLauncher" ref="jobLauncher" />
</bean>
<bean id="jobRegistry" class="org.springframework.batch.core.configuration.support.MapJobRegistry"/>
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager"/>
</bean>
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
Should I be making a jobOperator2, JobLauncher2, ... for all of these?
No, those are fine and should remain single instances per application and per job store.
All you need is a new job definition for the new task - something like the sketch below.
See more information in the Reference Manual.
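For illustration, here is a minimal sketch of launching the additional job through the exact same shared beans; only the job definition is new. The job name "secondTableJob" and the wrapping class are made up for this example:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.configuration.JobRegistry;
import org.springframework.batch.core.launch.JobLauncher;

public class SecondTableJobRunner {

    // The same shared beans, defined once per application.
    private JobRegistry jobRegistry;
    private JobLauncher jobLauncher;

    public JobExecution launchSecondJob() throws Exception {
        // "secondTableJob" is a hypothetical name for the new job definition.
        Job job = jobRegistry.getJob("secondTableJob");
        // Unique parameters so each run creates a new JobInstance.
        JobParameters params = new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())
                .toJobParameters();
        return jobLauncher.run(job, params);
    }
}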
I have a problem using mappers in mybatis-spring (with Spring Batch).
I need to use a SqlSessionTemplate with ExecutorType in BATCH mode for performance reasons (my program must execute thousands of insert statements into a table).
However, in my program I also need to log errors and update states in another table of the database, and if something goes wrong in the execution of the current step, everything is rolled back, including the logs - which is not acceptable behaviour.
I thought I could simply set up two different SqlSessionTemplates with different ExecutorTypes, but if my step uses two mappers with different templates, I get an exception saying that I can't change the ExecutorType during a transaction, and I don't know how to get around that.
Any help is appreciated. Here is some XML configuration.
<!-- connect to database -->
<bean id="dataSource" class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy">
    <property name="targetDataSource">
        <ref local="mainDataSource" />
    </property>
</bean>
<bean id="mainDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${db.driver}" />
    <property name="url" value="${db.url}" />
    <property name="username" value="${db.user}" />
    <property name="password" value="${db.pass}" />
</bean>
<bean id="infrastructureSqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="mapperLocations" value="classpath*:com/generali/danni/sipo/mdv/dao/mybatis/*Mapper*.xml" />
    <property name="configLocation" value="classpath:mybatis-config.xml" />
</bean>
<bean id="infrastructureSqlSessionTemplateBatch" class="org.mybatis.spring.SqlSessionTemplate">
    <constructor-arg index="0" ref="infrastructureSqlSessionFactory" />
    <constructor-arg index="1" value="BATCH" />
</bean>
<bean id="infrastructureSqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
    <constructor-arg index="0" ref="infrastructureSqlSessionFactory" />
</bean>
<bean id="infrastructureAbstractMapper" class="org.mybatis.spring.mapper.MapperFactoryBean" abstract="true">
    <property name="sqlSessionTemplate" ref="infrastructureSqlSessionTemplate" />
</bean>
<bean id="infrastructureAbstractMapperBatch" class="org.mybatis.spring.mapper.MapperFactoryBean" abstract="true">
    <property name="sqlSessionTemplate" ref="infrastructureSqlSessionTemplateBatch" />
</bean>
<bean id="erroriMapper" parent="infrastructureAbstractMapper">
    <property name="mapperInterface" value="com.mdv.dao.ErroriMapper" />
</bean>
<bean id="stagingFileMapper" parent="infrastructureAbstractMapperBatch">
    <property name="mapperInterface" value="com.mdv.dao.StagingFileMapper" />
</bean>
Here I have two mappers: one I'd like to use in BATCH mode, the other in SIMPLE mode.
How can I accomplish this? Every suggestion is appreciated.
Thanks in advance, and sorry for my bad English.
After a lot of attempts, I decided to change my approach to solve this problem.
I programmatically defined a new SqlSessionFactory, opened a new SqlSession from it with the BATCH executor, and used that one.
Since it is an entirely different SqlSessionFactory, it seems it doesn't cause problems if I use two different ExecutorTypes.
Here is some sample working code:
import org.apache.ibatis.mapping.Environment;
import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;
import org.apache.ibatis.transaction.jdbc.JdbcTransactionFactory;

// Build a second, independent MyBatis environment so the BATCH session
// does not share an executor (or its transaction) with the SIMPLE one.
Environment environment = new Environment("TEST", new JdbcTransactionFactory(), dataSource);
Configuration configuration = new Configuration(environment);
configuration.addMappers("com.mdv.dao");
SqlSessionFactory ssf = new SqlSessionFactoryBuilder().build(configuration);

SqlSession sqlSession = ssf.openSession(ExecutorType.BATCH);
try {
    StagingFileMapper sfm = sqlSession.getMapper(StagingFileMapper.class);
    for (Record r : staging) {
        StagingFile sf = new StagingFile();
        // set your sf fields
        sfm.insert(sf);
    }
    sqlSession.commit();
} catch (Exception e) {
    // manage the exception (e.g. sqlSession.rollback())
} finally {
    sqlSession.close();
}
I am using Spring and trying to set up a global transaction spanning two MS SQL Server DBs. The app is running inside Tomcat 6.
I have these definitions:
<bean id="dataSource1" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
....
</bean>
<bean id="sessionFactory1"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource1"/>
....
</bean>
<bean id="hibernateTransactionManager1"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory">
<ref local="sessionFactory1"/>
</property>
</bean>
<bean id="dataSource2" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
....
</bean>
<bean id="sessionFactory2"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource2"/>
....
</bean>
<bean id="hibernateTransactionManager2"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory">
<ref local="sessionFactory2"/>
</property>
</bean>
Then also, each DAO is linked either to sessionFactory1 or to sessionFactory2.
<bean name="stateHibernateDao" class="com.project.dao.StateHibernateDao">
<property name="sessionFactory" ref="sessionFactory1"/>
</bean>
Also, I recently added these two.
<bean id="atomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close">
<property name="forceShutdown" value="false" />
<property name="transactionTimeout" value="300" />
</bean>
<bean id="atomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp">
<property name="transactionTimeout" value="300" />
</bean>
I am trying to programmatically manage the global transaction (this is some old legacy code and I don't want to change it too much, so I prefer keeping it managed programmatically).
So now I have this UserTransaction ut (injected by Spring); I call ut.begin(), do some DB/DAO operations on the two DBs through the DAOs, then call ut.commit().
The thing is that even before the ut.commit() call, I can see the data has already been committed to the DBs?!
I don't think Atomikos is aware of my two DBs, their data sources, session factories, etc. I don't think it starts any transactions on them; it looks like they are not enlisted at all in the global transaction.
To me it seems that each DB/DAO operation goes to the SQL Server on its own, so SQL Server creates an implicit transaction for just that DAO/DB operation, applies the operation, and commits that implicit transaction.
But these last two points are just guesses of mine.
My questions:
Do I need to start the two DB transactions myself? (This is what I am currently doing and what I am trying to get rid of; that's why I turned to Atomikos in the first place.)
How can I configure all this correctly so that when I call ut.begin() it begins a global transaction on the two DBs, and when I call ut.commit() it commits it?
I haven't played with JTA recently, so it seems I am missing something quite basic here. What is it?
Edit 1
<bean id="globalTransactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="userTransaction" ref="atomikosUserTransaction"/>
<property name="transactionManager" ref="atomikosTransactionManager" />
<property name="allowCustomIsolationLevels" value="true" />
<property name="transactionSynchronization" value="2" />
</bean>
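One note on the enlistment guess above: a plain BasicDataSource hands out ordinary, non-XA connections, so Atomikos has nothing to enlist. Below is a minimal, hedged sketch of an XA-capable replacement using Atomikos' AtomikosDataSourceBean around Microsoft's SQLServerXADataSource; the resource, host, and database names are hypothetical placeholders:

import java.util.Properties;
import com.atomikos.jdbc.AtomikosDataSourceBean;

public class XaDataSources {

    // Wraps one SQL Server database in an Atomikos XA pool so its
    // connections can be enlisted in the global JTA transaction.
    // All parameter values passed in are hypothetical placeholders.
    public static AtomikosDataSourceBean sqlServerXa(String resourceName,
                                                     String host, String dbName) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName(resourceName); // must be unique per database
        ds.setXaDataSourceClassName("com.microsoft.sqlserver.jdbc.SQLServerXADataSource");
        Properties xa = new Properties();
        xa.setProperty("serverName", host);
        xa.setProperty("databaseName", dbName);
        ds.setXaProperties(xa);
        ds.setMaxPoolSize(10);
        return ds;
    }
}

Each sessionFactory would then point at such a data source instead of the plain BasicDataSource, so that ut.begin()/ut.commit() can actually span both databases.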
I'm trying to add one more database/schema/persistence unit to my project and I'm receiving the error:
No unique bean of type [javax.persistence.EntityManagerFactory] is defined: expected single bean but found 2
I googled a lot and could not find out why Spring is complaining about my configuration.
Here is part of my applicationContext.xml
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="persistenceUnitName" value="transactionManager" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="${show.hibernate.sql}" />
<property name="generateDdl" value="false" />
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5Dialect" />
</bean>
</property>
</bean>
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${database.driver}" />
<property name="url" ...
<property name="testOnBorrow" value="true" />
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<bean id="entityManagerFactoryREST" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSourceREST" />
<property name="persistenceUnitName" value="REST" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="${show.hibernate.sql}" />
<property name="generateDdl" value="false" />
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5Dialect" />
</bean>
</property>
</bean>
<bean id="dataSourceREST" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${database.driver}" />
...
<property name="testOnBorrow" value="true" />
</bean>
<bean id="transactionManagerREST" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactoryREST" />
</bean>
<tx:annotation-driven transaction-manager="REST"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
Some questions:
Do I need to have two tx:annotation-driven elements?
Do I need to specify persistenceUnitName in the factory?
I'm putting some notes from my digging in the Spring forum (LINK).
Well, that's it... any help will be appreciated!
With Spring, you need to have only one EntityManagerFactory.
What you are looking for is described in the Spring documentation, chapter 13.5.1.4, "Dealing with multiple persistence units".
I copy/paste the text:
"13.5.1.4 Dealing with multiple persistence units
For applications that rely on multiple persistence units locations, stored in various JARS in the classpath, for example, Spring offers the PersistenceUnitManager to act as a central repository and to avoid the persistence units discovery process, which can be expensive. The default implementation allows multiple locations to be specified that are parsed and later retrieved through the persistence unit name. (By default, the classpath is searched for META-INF/persistence.xml files.)
<bean id="pum" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
<property name="persistenceXmlLocations">
<list>
<value>org/springframework/orm/jpa/domain/persistence-multi.xml</value>
<value>classpath:/my/package/**/custom-persistence.xml</value>
<value>classpath*:META-INF/persistence.xml</value>
</list>
</property>
<property name="dataSources">
<map>
<entry key="localDataSource" value-ref="local-db"/>
<entry key="remoteDataSource" value-ref="remote-db"/>
</map>
</property>
<!-- if no datasource is specified, use this one -->
<property name="defaultDataSource" ref="remoteDataSource"/>
</bean>
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitManager" ref="pum"/>
<property name="persistenceUnitName" value="myCustomUnit"/>
</bean>
The default implementation allows customization of the PersistenceUnitInfo instances, before they are fed to the JPA provider, declaratively through its properties, which affect all hosted units, or programmatically, through the PersistenceUnitPostProcessor, which allows persistence unit selection. If no PersistenceUnitManager is specified, one is created and used internally by LocalContainerEntityManagerFactoryBean."
This exception means that you are trying to autowire an EntityManagerFactory by type. Do you have any @Autowired annotations in your code?
Also, when using @PersistenceContext, set the unitName attribute correctly. And (I'm not sure if this is the proper thing to do) try setting the name attribute to your respective factory name.
Also, check that you haven't copy-pasted the REST transaction manager incorrectly - right now there is no bean named REST.
Ensure all of your @PersistenceContext annotations specify unitName. I haven't figured out how to tell Spring that a particular EMF or persistence unit is the default. I thought specifying primary="true" on the default EMF would work, but it doesn't appear to.
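For illustration, a DAO tied to the second unit could look like the following sketch; the class and method are made up, but the unitName value must match the persistenceUnitName ("REST") configured on entityManagerFactoryREST:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class RestDao {

    // unitName disambiguates between the two EntityManagerFactory beans.
    @PersistenceContext(unitName = "REST")
    private EntityManager em;

    public <T> T find(Class<T> type, Object id) {
        return em.find(type, id);
    }
}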
Do I need to specify persistenceUnitName in the factory?
If you've got multiple persistence units, you do need to specify which one each factory will use.
More to the heart of the matter, see SPR-3955. To summarize, versions prior to Spring 3.0M4 do not support multiple transaction managers with @Transactional. Nor do I believe it honors the unitName attribute for @PersistenceContext, so you can't specify that either.
For an example of how I worked around this by explicitly injecting EntityManagerFactorys and using AOP to re-enable @Transactional, see my sample app.