I am using Spring and trying to set up a global transaction spanning two MS SQL Server DBs. The app is running inside Tomcat 6.
I have these definitions:
<bean id="dataSource1" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
....
</bean>
<bean id="sessionFactory1"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource1"/>
....
</bean>
<bean id="hibernateTransactionManager1"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory">
<ref local="sessionFactory1"/>
</property>
</bean>
<bean id="dataSource2" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
....
</bean>
<bean id="sessionFactory2"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource2"/>
....
</bean>
<bean id="hibernateTransactionManager2"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory">
<ref local="sessionFactory2"/>
</property>
</bean>
Then also, each DAO is linked either to sessionFactory1 or to sessionFactory2.
<bean name="stateHibernateDao" class="com.project.dao.StateHibernateDao">
<property name="sessionFactory" ref="sessionFactory1"/>
</bean>
Also, I recently added these two.
<bean id="atomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close">
<property name="forceShutdown" value="false" />
<property name="transactionTimeout" value="300" />
</bean>
<bean id="atomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp">
<property name="transactionTimeout" value="300" />
</bean>
I am trying to programmatically manage the global transaction (this is some old legacy code and I don't want to change it too much, so I prefer keeping this managed programmatically).
So now I have this UserTransaction ut (injected by Spring): I call ut.begin(), do some DB/DAO operations against the two DBs through the DAOs, then I call ut.commit().
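In code the flow is roughly this (simplified sketch; ut is the injected atomikosUserTransaction, and the DAO calls are just placeholders for the real legacy operations):

public void doWorkAcrossBothDatabases() throws Exception {
    ut.begin();                                 // injected com.atomikos.icatch.jta.UserTransactionImp
    try {
        stateHibernateDao.saveOrUpdate(state);  // DAO wired to sessionFactory1 / dataSource1
        otherHibernateDao.saveOrUpdate(other);  // placeholder DAO wired to sessionFactory2 / dataSource2
        ut.commit();
    } catch (Exception e) {
        ut.rollback();
        throw e;
    }
}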
The thing is that even before the ut.commit() call, I can see the data is already committed to the DBs?!
My guesses:
1) I don't think Atomikos is aware of my two DBs, their data sources, session factories, etc. I don't think it starts any transactions on them; it looks like they are not enlisted in the global transaction at all.
2) It seems to me that each DB/DAO operation goes to the SQL Server on its own, so SQL Server creates an implicit transaction for just that DAO/DB operation, applies the operation, and commits the implicit transaction.
But 1) and 2) are just guesses of mine.
My questions:
Do I need to start the two DB transactions myself? (This is what I am currently doing and what I am trying to get rid of; that's why I am trying to use Atomikos in the first place.)
How can I configure all this correctly so that when I call ut.begin() it begins a global transaction spanning the two DBs, and when I call ut.commit() it commits it?
I haven't played with JTA recently, so it seems to me I am missing something quite basic here. What is it?
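My current suspicion is that the two plain BasicDataSource pools would have to be replaced with XA-capable data sources that Atomikos can enlist. Something like the untested sketch below, assuming Atomikos' AtomikosDataSourceBean and the Microsoft driver's SQLServerXADataSource (in the real config these would of course be Spring beans; the server, database, user and password values are placeholders):

import java.util.Properties;
import com.atomikos.jdbc.AtomikosDataSourceBean;

public class XaDataSourceSketch {

    static AtomikosDataSourceBean sqlServerXaDataSource(String resourceName, String host, String db) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName(resourceName); // must be unique per enlisted resource
        ds.setXaDataSourceClassName("com.microsoft.sqlserver.jdbc.SQLServerXADataSource");
        Properties xaProps = new Properties();
        xaProps.setProperty("serverName", host);
        xaProps.setProperty("databaseName", db);
        xaProps.setProperty("user", "username");     // placeholder
        xaProps.setProperty("password", "password"); // placeholder
        ds.setXaProperties(xaProps);
        ds.setPoolSize(5);
        return ds;
    }
}

From what I've read, SQL Server itself also needs XA transactions enabled (MS DTC plus the driver's sqljdbc_xa components) for this to work, but I haven't verified that part. Is that the missing piece, or does the enlistment happen some other way?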
Edit 1
<bean id="globalTransactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="userTransaction" ref="atomikosUserTransaction"/>
<property name="transactionManager" ref="atomikosTransactionManager" />
<property name="allowCustomIsolationLevels" value="true" />
<property name="transactionSynchronization" value="2" />
</bean>
Related
I'm using WebLogic 10.3.3 with Oracle 11g and face a weird problem with Spring Batch as soon as I switch from Spring's ResourcelessTransactionManager (which is mainly for testing) to the production DataSourceTransactionManager. First I used WebLogic's default driver, oracle.jdbc.xa.client.OracleXADataSource, but this one fails because Spring can't set the isolation level - this is also documented here.
I'm fine with that since I don't need global transactions anyway, so I switched to oracle.jdbc.driver.OracleDriver. Now I'm getting the error message
ORA-01453: SET TRANSACTION must be first statement of transaction
I can't find a lot of information on this; there was a bug, but that should have been fixed in Oracle 7 a long time ago. It looks like a transaction is started before (?) the actual job gets added to the JobRepository and is not closed properly, or something like that.
I was able to solve this by setting the isolation level for all transactions to READ_COMMITTED. By default, Spring Batch sets it to SERIALIZABLE, which is very strict (but perfectly fine). That didn't work on my machine, although Oracle should support it:
http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html
Here's my code - first for the configuration:
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
<property name="dataSource" ref="dataSource" />
<property name="isolationLevelForCreate" value="ISOLATION_READ_COMMITTED" />
</bean>
...and this is for the job itself (simplified):
public class MyFancyBatchJob {
    // jobRegistry and jobLauncher are assumed to be injected by Spring (not shown in the original snippet)
    @Autowired private JobRegistry jobRegistry;
    @Autowired private JobLauncher jobLauncher;

    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void addJob() throws Exception {
        JobParameters params = new JobParametersBuilder().toJobParameters();
        Job job = jobRegistry.getJob("myFancyJob");
        JobExecution execution = jobLauncher.run(job, params);
    }
}
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" >
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"></property>
<property name="url" value="jdbc:oracle:thin:<username>/<password>#<host>:1521:<sid>" />
</bean>
<jdbc:initialize-database data-source="dataSource">
<jdbc:script location="org/springframework/batch/core/schema-drop-oracle10g.sql" />
<jdbc:script location="org/springframework/batch/core/schema-oracle10g.sql" />
</jdbc:initialize-database>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseType" value="oracle" />
<property name="tablePrefix" value="BATCH_"/>
<property name="isolationLevelForCreate" value="ISOLATION_DEFAULT"/>
</bean>
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
This is for Spring Batch with Oracle 10g and 11g.
I am using Hibernate 4, Spring 3, JSF 2.0 and WebLogic 10.3.6 as the server.
I have created a data source on the WebLogic server, and in applicationContext.xml I have defined the data source as
<!-- Data Source Declaration -->
<bean id="DataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/myDS"/>
</bean>
If I want to use P6Spy for logging SQL parameters, how and where should I add the following in applicationContext.xml?
<property name="hibernate.connection.driver_class">com.p6spy.engine.spy.
P6SpyDriver</property>
Any help is highly appreciated.
Thanks
The easiest way to integrate P6Spy with Spring is to use the P6DataSource class. The P6DataSource class is just a proxy for the real data source, which lets you obtain the real data source using any of the Spring data source factory implementations.
<bean id="dataSource" class="com.p6spy.engine.spy.P6DataSource">
<constructor-arg>
<bean id="DataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/myDS"/>
</bean>
</constructor-arg>
</bean>
If you are using an XADataSource, just change the class name to P6ConnectionPoolDataSource as shown below. Note: P6ConnectionPoolDataSource implements both the ConnectionPoolDataSource and XADataSource interfaces.
<bean id="dataSource" class="com.p6spy.engine.spy.P6ConnectionPoolDataSource">
<constructor-arg>
<bean id="DataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/myDS"/>
</bean>
</constructor-arg>
</bean>
You need to create a session factory bean in the applicationContext.xml file as follows:
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource">
<ref bean="dataSource" />
</property>
</bean>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.p6spy.engine.spy.
P6SpyDriver" />
<property name="url" value="jdbc\:mysql\://localhost\:3306/testdb" />
<property name="username" value="my_username" />
<property name="password" value="my_password" />
</bean>
Please refer to: http://www.mkyong.com/hibernate/how-to-display-hibernate-sql-parameter-values-solution/ for more about P6Spy library.
Alternatively, we can omit the "dataSource" bean and write the connection properties directly in the Hibernate configuration. Ref: how to configure hibernate config file for sql server
I am using Spring's routing data source as explained here, and things work well. Now I want to add connection pooling (Apache DBCP). I changed the basic data source to the connection pool data source, but it does not work.
On server start-up I see that connection pooling is happening and I can debug Apache's code, but then, when I try to access the DB through my code, I go to the routing data source and from there to the DriverManager class to get a connection - completely ignoring Apache's pooling code.
<bean id="catalogDataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="${jdbc.driverClass}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.user}" />
<property name="password" value="${jdbc.pwd}" />
</bean>
<bean id="pool" class="org.apache.commons.pool.impl.GenericObjectPool">
<property name="minEvictableIdleTimeMillis"><value>300000</value></property>
<property name="timeBetweenEvictionRunsMillis"><value>60000</value></property>
</bean>
<bean id="dsConnectionFactory" class="org.apache.commons.dbcp.DataSourceConnectionFactory">
<constructor-arg><ref bean="catalogDataSource"/></constructor-arg>
</bean>
<bean id="poolableConnectionFactory" class="org.apache.commons.dbcp.PoolableConnectionFactory">
<constructor-arg index="0"><ref bean="dsConnectionFactory"/></constructor-arg>
<constructor-arg index="1"><ref bean="pool"/></constructor-arg>
<constructor-arg index="2"><null/></constructor-arg>
<constructor-arg index="3"><null/></constructor-arg>
<constructor-arg index="4"><value>false</value></constructor-arg>
<constructor-arg index="5"><value>true</value></constructor-arg>
</bean>
<bean id="pooledDS" class="org.apache.commons.dbcp.PoolingDataSource" depends-on="poolableConnectionFactory">
<constructor-arg><ref bean="pool"/></constructor-arg>
</bean>
<bean id="routingDataSource" class="something that derived from RoutingDataSource">
<property name="defaultTargetDataSource" ref="pooledDS"/>
<property name="targetDataSources">
<map key-type="java.lang.Integer">
</map>
</property>
</bean>
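For reference, the routing class itself is essentially an AbstractRoutingDataSource subclass like this (simplified sketch; CatalogContextHolder is just an illustration of a thread-local key holder, not the real class):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class CatalogRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // The returned key is looked up in targetDataSources; if there is no match,
        // the defaultTargetDataSource (pooledDS above) should be used.
        return CatalogContextHolder.getCurrentCatalogId(); // hypothetical thread-local holder
    }
}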
Can you help me please - what did I do wrong?
I'm trying to add one more database/schema/persistenceUnit in my project and I'm receiving the error:
No unique bean of type [javax.persistence.EntityManagerFactory] is defined: expected single bean but found 2
I have googled and read the API docs a lot and could not find out why Spring is complaining about my configuration.
Here is part of my applicationContext.xml
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="persistenceUnitName" value="transactionManager" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="${show.hibernate.sql}" />
<property name="generateDdl" value="false" />
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5Dialect" />
</bean>
</property>
</bean>
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${database.driver}" />
<property name="url" ...
<property name="testOnBorrow" value="true" />
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<bean id="entityManagerFactoryREST" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSourceREST" />
<property name="persistenceUnitName" value="REST" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="${show.hibernate.sql}" />
<property name="generateDdl" value="false" />
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5Dialect" />
</bean>
</property>
</bean>
<bean id="dataSourceREST" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${database.driver}" />
...
<property name="testOnBorrow" value="true" />
</bean>
<bean id="transactionManagerREST" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactoryREST" />
</bean>
<tx:annotation-driven transaction-manager="REST"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
Some questions:
Do I need to have two tx:annotation-driven ?
Do I need to specify persistenceUnitName in the factory ?
I'm putting some notes of my digging in the Spring forum (LINK).
Well, that's it... any help will be appreciated!
With Spring, you need to have only one EntityManagerFactory.
What you are looking for is described in the Spring documentation, chapter 13.5.1.4: "Dealing with multiple persistence units".
I copy/paste the text:
"13.5.1.4 Dealing with multiple persistence units
For applications that rely on multiple persistence units locations, stored in various JARS in the classpath, for example, Spring offers the PersistenceUnitManager to act as a central repository and to avoid the persistence units discovery process, which can be expensive. The default implementation allows multiple locations to be specified that are parsed and later retrieved through the persistence unit name. (By default, the classpath is searched for META-INF/persistence.xml files.)
<bean id="pum" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
<property name="persistenceXmlLocations">
<list>
<value>org/springframework/orm/jpa/domain/persistence-multi.xml</value>
<value>classpath:/my/package/**/custom-persistence.xml</value>
<value>classpath*:META-INF/persistence.xml</value>
</list>
</property>
<property name="dataSources">
<map>
<entry key="localDataSource" value-ref="local-db"/>
<entry key="remoteDataSource" value-ref="remote-db"/>
</map>
</property>
<!-- if no datasource is specified, use this one -->
<property name="defaultDataSource" ref="remoteDataSource"/>
</bean>
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitManager" ref="pum"/>
<property name="persistenceUnitName" value="myCustomUnit"/>
</bean>
The default implementation allows customization of the PersistenceUnitInfo instances, before they are fed to the JPA provider, declaratively through its properties, which affect all hosted units, or programmatically, through the PersistenceUnitPostProcessor, which allows persistence unit selection. If no PersistenceUnitManager is specified, one is created and used internally by LocalContainerEntityManagerFactoryBean."
This exception means that you are trying to autowire EntityManagerFactory by type. Do you have any @Autowired annotation in your code?
Also, when using @PersistenceContext, set the unitName attribute correctly. And (I'm not sure if this is the proper thing to do) - try setting the name attribute to your respective factory name.
Also, check that you haven't copy-pasted the REST transaction manager incorrectly - as it stands there is no bean named REST (the transaction manager bean is transactionManagerREST).
Ensure all of your @PersistenceContext annotations specify unitName. I haven't figured out how to tell Spring that a particular EMF or persistence unit is the default. I thought specifying primary="true" on the default EMF would work, but it doesn't appear to.
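For example, matching the unit names from the configuration in the question (just a sketch; the DAO class is made up):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class SomeDao {

    // matches persistenceUnitName="transactionManager" on entityManagerFactory
    @PersistenceContext(unitName = "transactionManager")
    private EntityManager em;

    // matches persistenceUnitName="REST" on entityManagerFactoryREST
    @PersistenceContext(unitName = "REST")
    private EntityManager emRest;
}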
Do I need to specify persistenceUnitName in the factory ?
If you've got multiple persistence units, you do need to specify which ones the factories will use.
More to the heart of the matter, see SPR-3955. To summarize, versions prior to Spring 3.0M4 do not support multiple transaction managers with @Transactional. Nor do I believe it honors the unitName attribute for @PersistenceContext, so you can't specify that either.
For an example of how I worked around this by explicitly injecting EntityManagerFactorys and using AOP to re-enable @Transactional, see my sample app.
I am trying to use Envers on a project that also uses Hibernate and Spring - and I really appreciate the code reduction offered by HibernateTemplate.
I configured Envers under JPA, and after a few tweaks I was able to have the schema generated by the EnversHibernateToolTask Ant task (including the auditing tables). However, when I write code such as:
hibernateTemplate.saveOrUpdate(f);
the data is persisted, but nothing goes to the auditing tables. Conversely, if I write:
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
em.persist(f);
em.getTransaction().commit();
then the data goes to the audit tables (but I'd rather use the former syntax - I know using JPA's EntityManager decouples the code from Hibernate, but it simply does not pay off the hassle; changing the ORM engine is not in my wildest dreams for this project).
It may help to check my applicationContext.xml configuration:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitName" value="projetox" />
<property name="dataSource" ref="dataSource" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true" />
</bean>
</property>
</bean>
<bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="packagesToScan" value="com.w2it.projetox.model" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</prop>
</props>
</property>
</bean>
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
...
</bean>
and here is my persistence.xml setup:
<persistence-unit name="projetox" transaction-type="RESOURCE_LOCAL">
<jta-data-source>java:/DefaultDS</jta-data-source>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect" />
<!-- Hibernate Envers -->
<property name="hibernate.ejb.event.post-insert"
value="org.hibernate.envers.event.AuditEventListener" />
<property name="hibernate.ejb.event.post-update"
value="org.hibernate.envers.event.AuditEventListener" />
<property name="hibernate.ejb.event.post-delete"
value="org.hibernate.envers.event.AuditEventListener" />
<property name="hibernate.ejb.event.pre-collection-update"
value="org.hibernate.envers.event.AuditEventListener" />
<property name="hibernate.ejb.event.pre-collection-remove"
value="org.hibernate.envers.event.AuditEventListener" />
<property name="hibernate.ejb.event.post-collection-recreate"
value="org.hibernate.envers.event.AuditEventListener" />
</properties>
</persistence-unit>
Does anyone have a hint on what is going on here? Thank you!
HibernateTemplate has a JPA counterpart, JpaTemplate, which provides fairly similar functionality.
The reason Envers doesn't work with HibernateTemplate is that it relies on the JPA events (you can see the listeners declared in your persistence.xml above) that are triggered when the EntityManager is used. It is possible in theory to write code that triggers those events from the Hibernate Session when HibernateTemplate is used, but it's rather involved.
All you need to do is put @Transactional on your DAO, or on the services that call the DAO's save()/update() methods.
Even if you register the event listeners, those events are not fired unless you use Spring's transaction management. Spring has to know about the transaction and tell Hibernate, so that these events are fired.
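A sketch of that suggestion, using the hibernateTemplate bean from the question (the service and entity names are made up, and this assumes a HibernateTransactionManager for the same sessionFactory plus <tx:annotation-driven/> are configured):

import org.springframework.orm.hibernate3.HibernateTemplate;
import org.springframework.transaction.annotation.Transactional;

public class ProjetoService {

    private HibernateTemplate hibernateTemplate; // injected by Spring

    public void setHibernateTemplate(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    @Transactional
    public void save(Object entity) {
        // Per the answer above, running inside a Spring-managed transaction is what
        // makes Hibernate flush on commit and lets the audit listeners fire.
        hibernateTemplate.saveOrUpdate(entity);
    }
}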