I am using Spring with JDBC and found that connections are in auto-commit mode.
How can I configure Spring to turn auto-commit off in spring-servlet.xml?
This is my current configuration:
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}" p:username="${jdbc.username}"
p:password="${jdbc.password}" />
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
It seems that my configuration was missing this line:
<tx:annotation-driven transaction-manager="txManager"/>
Then, in my service classes, I use the @Transactional annotation. For example:
@Service
public class CompanyServiceImpl implements CompanyService {

    @Autowired
    private CompanyDAO companyDAO;

    @Transactional
    public void addCompany(Company company) {
        companyDAO.addCompany(company);      // in here, there is a JDBC SQL insert
        companyDAO.addCompany_fail(company); // just for test
    }
}
If an exception occurs in addCompany_fail(), the first addCompany() call will also be rolled back.
I followed this document to understand the idea of how transactions are controlled in Spring:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/transaction.html
I followed this document to understand how to code with JDBC in Spring.
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/jdbc.html
I also read this (free) article: http://www.infoq.com/news/2009/04/java-transaction-models-strategy. It is a really good one, and I feel the same as the writer: most people do not understand (or care) about transactions.
PS:
It seems that many people misunderstand frameworks like Hibernate/Spring as existing only to avoid the complexity of JDBC and transaction control. Many people think along the lines of "JDBC and transactions are so complex, just use Hibernate and forget about those two." Many examples on the internet about Spring+Hibernate or Spring+JDBC seemingly do not care about transactions at all. I feel that this is a bad joke. Transactions are too serious to just let something handle them without truly understanding them.
Hibernate and Spring are powerful and complex. And, as someone said, "With great power comes great responsibility."
UPDATE 2013-08-17: There is a good example about transactions here: http://www.byteslounge.com/tutorials/spring-transaction-propagation-tutorial. However, it does not explain why, if you want to use REQUIRES_NEW, you need to create another class (otherwise you will get this problem: Spring Transaction propagation REQUIRED, REQUIRES_NEW, where it seems REQUIRES_NEW does not really create a new transaction).
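(For readers wondering why a separate class is needed: Spring applies @Transactional through a proxy, so a REQUIRES_NEW method called from another method of the same bean is a self-invocation that bypasses the proxy, and no new transaction is started. A minimal sketch, with hypothetical class and method names:)

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Sketch only; class, field and method names are hypothetical.
@Service
public class OuterService {

    @Autowired
    private InnerService innerService; // a separate bean, so the call below goes through its proxy

    @Transactional
    public void outer() {
        // ... work in the outer transaction ...
        innerService.inner();   // REQUIRES_NEW takes effect: the proxy suspends the outer transaction
        // this.localInner();   // a call like this would NOT start a new transaction (self-invocation)
    }
}

// In a separate file:
@Service
public class InnerService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void inner() {
        // runs in its own, independent transaction
    }
}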
Update: 2018-01-01: I have created a full example with Spring Boot 1.5.8.RELEASE here https://www.surasint.com/spring-boot-database-transaction-jdbi/
and some basic experiment examples here https://www.surasint.com/spring-boot-connection-transaction/
Try the defaultAutoCommit property. The code would look like this:
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}" p:username="${jdbc.username}"
p:password="${jdbc.password}"
p:defaultAutoCommit="false" />
Look at the javadoc:
http://commons.apache.org/dbcp/apidocs/org/apache/commons/dbcp/BasicDataSource.html#defaultAutoCommit
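If you configure the pool in Java instead of XML, the equivalent is to call setDefaultAutoCommit(false) on the BasicDataSource. A minimal sketch of the Java-config form; the driver, URL and credentials below are placeholders:

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.context.annotation.Bean;

// A sketch of the Java-config equivalent; connection settings are placeholders.
@Bean(destroyMethod = "close")
public BasicDataSource dataSource() {
    BasicDataSource ds = new BasicDataSource();
    ds.setDriverClassName("com.mysql.jdbc.Driver"); // placeholder driver
    ds.setUrl("jdbc:mysql://localhost:3306/mydb");  // placeholder URL
    ds.setUsername("user");
    ds.setPassword("secret");
    ds.setDefaultAutoCommit(false);                 // hand out connections with auto-commit off
    return ds;
}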
You don't need to turn it off yourself: simply run your code within a transaction, and Spring will automatically disable auto-commit for you. The easiest way (at least to set up) to run a piece of code in a transaction in Spring is to use TransactionTemplate:
TransactionTemplate template = new TransactionTemplate(txManager);
template.execute(new TransactionCallback<Object>() {
    public Object doInTransaction(TransactionStatus transactionStatus) {
        // ALL YOUR CODE ARE BELONG TO... SINGLE TRANSACTION
        return null;
    }
});
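With Java 8, TransactionCallback has a single method, so the same callback can be written as a lambda (a sketch, reusing the template from above):

template.execute(status -> {
    // all your code runs in a single transaction
    return null;
});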
Related
This is regarding Spring's OpenSessionInViewFilter used with the @Transactional annotation at the service layer.
I went through many Stack Overflow posts on this but am still confused about whether I should use OpenSessionInViewFilter or not to avoid LazyInitializationException.
It would be a great help if somebody could help me find answers to the queries below.
Is it bad practice to use OpenSessionInViewFilter in an application having a complex schema?
Can using this filter cause the N+1 problem?
If we are using OpenSessionInViewFilter, does it mean @Transactional is not required?
Below is my Spring config file
<context:component-scan base-package="com.test"/>
<context:annotation-config/>
<bean id="messageSource"
class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
<property name="basename" value="resources/messages" />
<property name="defaultEncoding" value="UTF-8" />
</bean>
<bean id="propertyConfigurer"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
p:location="/WEB-INF/jdbc.properties" />
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}" p:username="${jdbc.username}"
p:password="${jdbc.password}" />
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
<property name="configurationClass">
<value>org.hibernate.cfg.AnnotationConfiguration</value>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${jdbc.dialect}</prop>
<prop key="hibernate.show_sql">true</prop>
<!--
<prop key="hibernate.hbm2ddl.auto">create</prop>
-->
</props>
</property>
</bean>
<tx:annotation-driven />
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
OpenSessionInView is a servlet filter that just opens a Hibernate session and stores it in the SessionHolder for the thread that is serving the request. With this session open, Hibernate can read lazily initialized collections and objects when you use them in the rendering stage of the request. This session can be accessed when you invoke SessionFactory.getCurrentSession().
But OpenSessionInView just opens the session; it doesn't begin any transactions. With a session open you can read objects from the database, but if you want to do something in a transaction you need @Transactional annotations or another mechanism to demarcate the beginning and end of the transaction where you want it.
So, to answer the questions:
Is it bad practice to use OpenSessionInViewFilter in an application having a complex schema?
It is good practice if you need to avoid the LazyInitializationException, and the only overhead is opening a new Hibernate session and closing it at the end of each request.
Can using this filter cause the N+1 problem?
I have used this filter in many projects and it has not caused any problems.
If we are using OpenSessionInViewFilter, does it mean @Transactional is not required?
No. You only have a Hibernate session open in the SessionHolder of the thread; if you need transactions, you still need to put @Transactional.
Throwing in my $0.02 here (and expanding on Fernando Rincon's excellent answer):
You shouldn't be using an OpenSessionInView filter just because you need to get around a LazyInitializationException. It's just going to add another layer of confusion and complexity to your system. You should know from your system design exactly where you are going to need to access collections on the front end. From there, it's easy and (in my experience) more logical to build a controller method that calls a service method to retrieve your collection.
However, if you have another problem that the OpenSessionInView filter solves, and as a happy side effect you then have a session open, then I don't see the harm in using it to access your collections. That said, if you use OpenSessionInView to fetch a collection in one place, you should refactor your code in other places to do the same thing, so that the strategy used to fetch collections is standardised across your application.
Weigh the cost of this refactor against the cost of writing the controller and service methods to determine whether you should be using an OpenSessionInView filter.
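To illustrate that controller-calls-service approach, here is a minimal sketch (the entity, DAO and method names are hypothetical): the lazy collection is initialized inside the transactional service method, so the view never needs an open session.

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// A sketch; Company, Employee and CompanyDao are hypothetical names.
@Service
public class CompanyService {

    @Autowired
    private CompanyDao companyDao;

    @Transactional(readOnly = true)
    public List<Employee> getEmployees(long companyId) {
        Company company = companyDao.findById(companyId);
        company.getEmployees().size(); // force initialization while the session is still open
        return company.getEmployees();
    }
}

The controller then simply calls companyService.getEmployees(id) and passes the result to the view.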
OpenSessionInViewFilter is a servlet filter that binds a Hibernate session to the HTTP request, so that for all DB operations, transactional and non-transactional, the same Hibernate session is used for a given HTTP request. This exposes the DB layer to the web layer, which makes it an anti-pattern.
My experience is that it makes the code difficult to debug when we want to make changes to Java objects and do not want those changes reflected in the database: since the Hibernate session is always open, it expects to flush the data to the database.
It should be used only when JS-based REST services are exposed with no service layer in between.
The typical usage pattern for OpenSessionInViewFilter is that some entity is lazily loaded, but during the view rendering phase the view needs some attribute of this entity that was not loaded initially, necessitating a fetch of this data from the database. Typically the transaction demarcation happens in the service layer of your web application, so by the time the view rendering takes place the view is working with a detached entity, which results in a LazyInitializationException when accessing the unloaded attribute.
From https://developer.jboss.org/wiki/OpenSessionInView:
The problem
A common issue in a typical web-application is the rendering of the view, after the main logic of the action has been completed, and therefore, the Hibernate Session has already been closed and the database transaction has ended. If you access detached objects that have been loaded in the Session inside your JSP (or any other view rendering mechanism), you might hit an unloaded collection or a proxy that isn't initialized. The exception you get is: LazyInitializationException: Session has been closed (or a very similar message). Of course, this is to be expected, after all you already ended your unit of work.
A first solution would be to open another unit of work for rendering the view. This can easily be done but is usually not the right approach. Rendering the view for a completed action is supposed to be inside the first unit of work, not a separate one. The solution, in two-tiered systems, with the action execution, data access through the Session, and the rendering of the view all in the same virtual machine, is to keep the Session open until the view has been rendered.
As an alternative, consider loading the entity with just the right amount of data required by your view. This can be accomplished by using DTO projections. This article lists some of the downsides of using the Open Session In View pattern: https://vladmihalcea.com/the-open-session-in-view-anti-pattern/
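A hedged sketch of such a DTO projection with JPA (the DTO class and query are hypothetical, and an injected javax.persistence.EntityManager is assumed): the query selects only the fields the view needs, so there is nothing lazy left to trigger a LazyInitializationException.

import java.util.List;
import javax.persistence.EntityManager;

// A sketch of a constructor-expression projection; names are hypothetical.
public class CompanySummaryDto {
    private final String name;
    private final long employeeCount;

    public CompanySummaryDto(String name, long employeeCount) {
        this.name = name;
        this.employeeCount = employeeCount;
    }
    // getters omitted
}

List<CompanySummaryDto> summaries = entityManager.createQuery(
        "select new com.example.CompanySummaryDto(c.name, size(c.employees)) from Company c",
        CompanySummaryDto.class)
    .getResultList();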
I have a problem where Spring injects a proxy of the DAO object into the service, but the service injected into the controller is the concrete class. This does not allow me to use a service-wide transaction and launches a transaction for each DAO call separately, which is not the behaviour I would expect.
Configuration:
The controller is a class with the @Controller annotation and constructor DI.
Service:
@Component
@Transactional
public class UserServiceImpl implements UserService { ... }
Dao:
@Component
@Transactional
public class UserDaoImpl implements UserDao { ... }
JPA Config:
<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" >
<property name="dataSource" ref="dataSource"/>
<property name="persistenceUnitName" value="xxxPersistenceUnit"/>
<property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml"/>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
</bean>
</property>
<property name="jpaProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
<prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto}</prop>
</props>
</property>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
<tx:annotation-driven />
Does anyone have any idea why this is happening?
Most likely your UserServiceImpl is created in the servlet context by mistake - please check the context:component-scan expressions to make sure that only Controller classes are included there.
See @Service are constructed twice for examples of component-scan filters.
For example, if the transaction manager bean and <tx:annotation-driven> are declared in the root web app context, then the transaction proxies will be created only for the beans in the root app context (from the Spring documentation):
BeanPostProcessor interfaces are scoped per-container. This is only
relevant if you are using container hierarchies. If you define a
BeanPostProcessor in one container, it will only do its work on the
beans in that container. Beans that are defined in one container are
not post-processed by a BeanPostProcessor in another container, even
if both containers are part of the same hierarchy.
Less likely is that the transactional configuration of the user service is set up to use another transaction manager (or another default propagation), but in that case a TransactionInterceptor invocation would be present in the stack trace of the DAO method.
It's absolutely OK to have @Transactional on DAO classes in Spring, if you understand what you are doing - the idea that a repository or DAO cannot open transactions comes from the dark times when you had to write boilerplate code to open transactions and it was hard to manage the transaction instances (and you could not be sure how transactions were managed). But when you are using declarative configuration, things are not that bad. Spring promotes a convention-over-configuration style where most methods use the Propagation.REQUIRED transaction mode. In Spring, Propagation.REQUIRED is the default mode when you decorate methods with @Transactional (this propagation is hardcoded in the @Transactional annotation declaration), which means that the new logical transaction is mapped to the same physical transaction, so decorating your DAO classes with @Transactional is harmless.
See http://static.springsource.org/spring/docs/3.2.x/spring-framework-reference/html/transaction.html#tx-propagation for the reference on transaction propagation in Spring
In Spring Data JPA (and I'm pretty sure they know what they are doing), for example, CRUD methods on repository instances are transactional by default. That may be useful in some cases; the mechanism is the same as when Hibernate allows you to get() arbitrary objects from the Session for display without declaring an explicit transaction (of course, it does not mean that the framework somehow manages to go without a transaction - it's just implicit in this case).
I'm having a little trouble understanding what you're saying, but it appears that you're surprised that you're getting a new transaction for every DAO call instead of just on the service call. Unfortunately, that's exactly what you've specified by putting @Transactional on the DAO class. Your DAO should not be transactional, at least if you're following the usual pattern. If I've understood you correctly, you should remove the @Transactional annotation from your DAO class.
The other responders are correct in that you should not annotate your DAO as @Transactional, but to really understand what is happening you should refer to the Transaction Propagation section in the Spring Reference Manual. The default propagation when using @Transactional is Propagation.REQUIRED, so review that specifically.
Your question isn't that specific so I'm not sure exactly what you're looking for.
Edit: Upon re-reading your question, it's possible that there is an issue with your component scanning. Check to make sure that your <tx:annotation-driven /> is in the same application context where you component-scan your service classes.
You shouldn't use the @Transactional annotation in your DAO object. You are defining it in your service, and that will guarantee that all your DAO methods called inside a service method are executed within the same transaction, which seems to be exactly what you want when you say "service-wide transaction", right?
Also, as suggested, you might want to change your annotation from @Component to @Service in UserServiceImpl and to @Repository in UserDaoImpl.
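For example, a sketch of the suggested layering (the method names are hypothetical); @Transactional stays on the service only:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserDao userDao;

    @Transactional // one transaction spans all DAO calls made from this method
    public void registerUser(User user) {
        userDao.save(user);
        // further DAO calls here join the same transaction
    }
}

// In a separate file:
@Repository
public class UserDaoImpl implements UserDao {
    // no @Transactional here; methods participate in the caller's transaction
}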
Best regards.
I have a situation where I have to handle multiple clients in one app, and each client has a separate database. To support that I'm using a Spring custom scope, quite similar to the built-in request scope. A user authenticates in each request and can set the context client ID based on the passed credentials. The scoping itself seems to be working properly.
So I used my custom scope to create a scoped proxy for my DataSource to support a different database per client, and I get connections to the proper databases.
Then I created a scoped proxy for the EntityManagerFactory to use JPA. This part also looks OK.
Then I added a scoped proxy for the PlatformTransactionManager for declarative transaction management. I use @Transactional on my service layer and it gets propagated nicely to my Spring Data powered repository layer.
All is fine and works correctly as long as I use only JPA. I can even switch the context to a different client within the request (I use ThreadLocals under the hood) and transactions to both databases are handled correctly.
The problems start when I try to use JdbcTemplate in one of my custom repositories. At first glance everything looks OK there too, as no exceptions are thrown. But when I check the database for the objects I thought I had inserted with my custom JDBC-based repository, they're not there!
I know for sure that I can use JPA and JDBC together by declaring only a JpaTransactionManager and passing both the DataSource and the EntityManagerFactory to it - I checked it without the scoped proxies and it works.
So the question is: how do I make JDBC work together with JPA using the JpaTransactionManager when I have scoped-proxied the DataSource, EntityManagerFactory and PlatformTransactionManager beans? To repeat: using only JPA works perfectly, but adding plain JDBC into the mix does not.
UPDATE 1: And one more thing: all read-only (SELECT) operations work fine with JDBC too - only writes (INSERT, UPDATE, DELETE) end up not committed or rolled back.
UPDATE 2: As @Tomasz suggested, I've removed the scoped proxies from EntityManagerFactory and PlatformTransactionManager, as those are indeed not needed and caused more confusion than anything else.
The real problem seems to be switching the scope context within a transaction. The TransactionSynchronizationManager binds transactional resources (i.e. the EMF or DS) to the thread at transaction start. It is able to unwrap the scoped proxy, so it binds the actual instance of the resource from the scope active at the time the transaction starts. When I then change the context within the transaction, it all gets messed up.
It seems like I need to suspend the active transaction and store aside the current transaction context, so that I can clear it upon entering another scope, make Spring think it's not inside a transaction any more, and force it to create a new one for the new scope when needed. Then, when leaving the scope, I'd have to restore the previously suspended transaction. Unfortunately, I have not been able to come up with a working implementation yet. Any hints appreciated.
Below is some of my code, but it's pretty standard, except for the scoped proxies.
The DataSource:
<!-- provides database name based on client context -->
<bean id="clientDatabaseNameProvider"
class="com.example.common.spring.scope.ClientScopedNameProviderImpl"
c:clientScopeHolder-ref="clientScopeHolder"
p:databaseName="${base.db.name}" />
<!-- an extension of org.apache.commons.dbcp.BasicDataSource that
uses proper database URL based on database name given by above provider -->
<bean id="jpaDataSource" scope="client"
class="com.example.common.spring.datasource.MysqlDbInitializingDataSource"
destroy-method="close"
p:driverClassName="${mysql.driver}"
p:url="${mysql.url}"
p:databaseNameProvider-ref="clientDatabaseNameProvider"
p:username="${mysql.username}"
p:password="${mysql.password}"
p:defaultAutoCommit="false"
p:connectionProperties="sessionVariables=storage_engine=InnoDB">
<aop:scoped-proxy proxy-target-class="false" />
</bean>
The EntityManagerFactory:
<bean id="jpaVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
p:database="MYSQL"
p:generateDdl="true"
p:showSql="true" />
<util:properties id="jpaProperties">
<!-- omitted for readability -->
</util:properties>
<bean id="jpaDialect"
class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="com.example.model.core"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:dataSource-ref="jpaDataSource"
p:jpaDialect-ref="jpaDialect"
p:jpaProperties-ref="jpaProperties" />
The PlatformTransactionManager:
<bean id="transactionManager"
class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="jpaDataSource"
p:entityManagerFactory-ref="entityManagerFactory" />
<tx:annotation-driven proxy-target-class="false" mode="proxy"
transaction-manager="transactionManager" />
I have code with this structure (there are a lot of classes, but the schema is like this):
void f() {
    MyObj o = db.getById(id);
    o.setField1(value);
    db.update(o);                       // write: raw JDBC, marked @Transactional

    o = db.getById(id);                 // read back: JdbcTemplate
    assertEquals(value, o.getField1());
}
The update and get methods use the same data source, injected with Spring. get works via JdbcTemplate, and update just takes a connection from the dataSource and uses raw JDBC.
update is marked with the @Transactional annotation.
Here is the definition of the transaction manager from the Spring config:
<tx:annotation-driven transaction-manager="TransactionManager"/>
<bean id="TransactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
The issue is that if I call update and afterwards get in different web-service method calls, the result is correct and I get the updated values.
But if I call them sequentially in one unit-test method, after update I don't see the updated value.
I can't post the whole read/write code here because it is large and split across many files, but perhaps you have some ideas how to fix it.
Thanks.
You have to flush the update before you can see it in the select.
You can try:
entityManager.refresh(yourEntity);
This way, you will get the most recent instance of your entity wherever you use it after this line.
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="data.emf" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="transactionManager2" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="data.emf" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager2" />
In my service layer, can I use @Transactional(name = "transactionManager2") to identify which transaction manager to use if I have multiple transaction managers?
You can specify which transaction manager to use with @Transactional via the value attribute:
A qualifier value for the specified
transaction.
May be used to determine the target
transaction manager, matching the
qualifier value (or the bean name) of
a specific PlatformTransactionManager
bean definition.
For example:
#Transactional("txManager1");
Alternatively, you can use the more explicit TransactionProxyFactoryBean, which gives you finer-grained control over which objects get proxied by which transaction managers. It still uses the annotations, but it doesn't auto-detect beans; it's configured explicitly on a bean-by-bean basis.
This normally isn't an issue, but it's not wise to have multiple transaction managers unless you have a very good reason to do so. If you find yourself needing two transaction managers, it's usually better to see if you can make do with one. For example, if you have two data sources configured in your app server, you can incorporate both into a single JtaTransactionManager, rather than two separate JpaTransactionManagers or DataSourceTransactionManagers.
More on the need for more than one transaction manager: you might be trying to do nested or separate transactions in sequence - in that case you can use different propagation settings. You can achieve that with a single transaction manager; see Transaction propagation.
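For example, a hedged sketch of sequencing an independent transaction with a single transaction manager by using the propagation attribute (the bean and method names are hypothetical):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AuditService {

    // Suspends the caller's transaction and runs in its own one,
    // so the audit entry commits even if the caller later rolls back.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void writeAuditEntry(String message) {
        // ...
    }
}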