I'm using Spring-based validation in combination with Hibernate Validator, enabled by the following in my application context:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    ...
    <property name="jpaPropertyMap">
        <map>
            <entry key="javax.persistence.validation.factory" value-ref="validator" />
        </map>
    </property>
</bean>
<bean id="validator" class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean"/>
I've implemented a custom validator that accesses a database to check validity constraints for a particular object, using a Spring-injected DAO. This results in a java.lang.StackOverflowError: it appears that validation is triggered every time an object is loaded from the database from within the validator, causing an infinite loop. To get around this, I have tried setting the flush mode on my entity manager to commit from within the validator with the following code:
entityManager.setFlushMode(FlushModeType.COMMIT);
This results in a "collection was not processed by flush()" exception from Hibernate.
Is there an example of best practice in accessing the database from within a custom validator which will allow me to get around both of these issues?
After much experimentation, it would appear that the best way to do this is to use the EntityManagerFactory directly from within the code. In the initialize(...) method of the validator class I have the following code:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("pu_name");
entityManager = emf.createEntityManager();
The downside is that you don't get Spring's DI features, but you can access the database nonetheless.
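For reference, here is a minimal sketch of what the full validator might look like with this approach (the UniqueEmail constraint, User entity and count query are illustrative, not from my actual code; only the two EntityManagerFactory lines above are):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class UniqueEmailValidator implements ConstraintValidator<UniqueEmail, String> {

    private EntityManager entityManager;

    @Override
    public void initialize(UniqueEmail constraintAnnotation) {
        // Going straight to JPA, bypassing Spring; "pu_name" is the persistence unit name.
        // Ideally the factory would be created once and cached, since it is expensive.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("pu_name");
        entityManager = emf.createEntityManager();
    }

    @Override
    public boolean isValid(String email, ConstraintValidatorContext context) {
        if (email == null) {
            return true; // leave null checks to @NotNull
        }
        Long count = entityManager
                .createQuery("select count(u) from User u where u.email = :email", Long.class)
                .setParameter("email", email)
                .getSingleResult();
        return count == 0;
    }
}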
I ran into this issue as well; here is how I solved it:
@Autowired bean works with @Valid on controller but fails with CRUD repository
In a few words, I also got a reference to the entity manager. However, I set the flush mode to FlushModeType.COMMIT just before calling my service method, and finally set it back to FlushModeType.AUTO:
Here is an example:
public class UniqueUsernameValidator implements ConstraintValidator<UniqueUsername, String> {

    @PersistenceContext
    private EntityManager em;

    @Autowired
    UserService userService;

    @Override
    public void initialize(UniqueUsername constraintAnnotation) {
    }

    @Override
    public boolean isValid(String username, ConstraintValidatorContext context) {
        try {
            em.setFlushMode(FlushModeType.COMMIT);
            return userService.findByUsername(username) == null;
        } finally {
            em.setFlushMode(FlushModeType.AUTO);
        }
    }
}
I have a Java 7 / Spring 3.2.17 application which has to connect to two different databases, so I have two different persistence.xml files, each one declaring its own persistence unit.
In my application context I have defined two entity manager factories, such as:
<bean id="emf1" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="myDatasource1" />
    <property name="persistenceXmlLocation" value="classpath:META-INF/persistence1.xml" />
    <property name="persistenceUnitName" value="pu1" />
    ...
</bean>
And in my DAO classes I just let Spring inject the entity manager:
@PersistenceContext
private EntityManager entityManager;

public void setEntityManager (...) { ... }
Spring complains that I have two EM factories so it doesn't know which one to use:
NoUniqueBeanDefinitionException: No qualifying bean of type [javax.persistence.EntityManagerFactory] is defined: expected single matching bean but found 2: emf1,emf2
I have partially solved it by specifying which persistence unit I want to use, like this:
@PersistenceContext(unitName = "pu1")
private EntityManager entityManager;

public void setEntityManager (...) { ... }
That solved the problem for the classes connecting to the first database. My problem is that the classes for the other DB are part of a third-party library, so I can't modify them to add the unitName attribute. Is there any other way I can do it?
I have tried a few options, but all of them lead to the same error message:
Extending the class so I can "override the annotation":
public class MyDao extends TheDaoThatICannotModify {

    @Override
    @PersistenceContext(unitName = "pu2")
    public void setEntityManager (EntityManager em) {
        super.setEntityManager(em);
    }
}
Instantiating the EM and injecting it myself:
<bean id="entityManager2" factory-bean="emf2" factory-method="getObject" />

<bean id="myDao" class="com.foo.TheDaoThatICannotModify">
    <property name="entityManager" ref="entityManager2" />
</bean>
Adding the primary="true" attribute to my emf2 bean (and primary="false" to emf1).
Adding the autowire-candidate="false" attribute to my emf1 bean.
I got it working... by using Spring injection only with my own classes, and passing the EM to the evil DAOs myself:
public class MyDaoSingletonFactoryIsh {

    @PersistenceContext(unitName = "pu2")
    private EntityManager em; // Injected by Spring

    private static TheDaoThatICannotModify dao = null;

    public TheDaoThatICannotModify getDAO() {
        if (dao == null) {
            dao = new TheDaoThatICannotModify();
            dao.setEntityManager(em);
        }
        return dao;
    }
}
I don't know what to call this pattern: factory, singleton, wrapper? It doesn't really fit any of these categories; it's a weird combination of them. That probably isn't a good sign: it looks like a huge code smell and I'd prefer to avoid it. But at least it's working, so, lacking a better solution...
I'm trying to properly use the Spring transaction management functionality provided by MyBatis-Spring.
I'm creating the sqlSessionFactory in the following manner:
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="mapperLocations" value="classpath:some/package/**/*.xml" />
    <property name="transactionFactory">
        <bean class="org.mybatis.spring.transaction.SpringManagedTransactionFactory" />
    </property>
</bean>

<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <constructor-arg ref="dataSource" />
</bean>
Now there is a section called "Programmatic Transaction Management" here, which gets a reference to the transactionManager and uses it to roll back or commit depending on whether we got an exception or not.
Now my question is whether in my DAO layer I should explicitly do something like
public class UserDao extends SqlSessionDaoSupport {

    PlatformTransactionManager transactionManager; // wired using bean-property

    public void insertUser(Integer userId) {
        // a status must first be obtained from the transaction manager
        TransactionStatus txStatus =
                transactionManager.getTransaction(new DefaultTransactionDefinition());
        try {
            getSqlSession().insert("user-map.insertUser", userId);
        } catch (Exception e) {
            transactionManager.rollback(txStatus);
            throw e;
        }
        transactionManager.commit(txStatus);
    }
}
or whether just using the following (without programmatic transactions) will also perform all the insertions in a transactional way.
public class UserDao extends SqlSessionDaoSupport {

    public void insertUser(Integer userId) {
        getSqlSession().insert("user-map.insertUser", userId);
    }
}
My mapper file looks something like this:

<insert id="insertUser" parameterType="HashMap">
    <!-- this contains multiple insert queries -->
</insert>
Note that I have multiple insert statements inside <insert>...</insert>, and I want either all of them to happen or none of them.
This is another reference that I was using.
So the general question is: will MyBatis provide automatic transaction management around my <insert>...</insert>, or will I have to use the transactionManager explicitly to get transactional behavior?
Here's the quote from the documentation you referenced:
MyBatis SqlSession provides you with specific methods to handle transactions programmatically. But when using MyBatis-Spring your beans will be injected with a Spring managed SqlSession or a Spring managed mapper. That means that Spring will always handle your transactions.
With the setup you provided, the transaction timespan is completely managed by Spring. That is, if you use declarative transaction management, you don't need to do anything additionally: Spring will start the transaction at the point your configuration directs it to.
The simple way to enable declarative transaction management is to add this to your Spring configuration:
<tx:annotation-driven/>
And then use @Transactional on your service methods:

@Service
public class MyService {

    @Autowired
    private UserDao userDao;

    @Transactional
    public void addUser(User user) {
        userDao.insertUser(user);
    }
}
The section in the documentation you mentioned is about the (rare) cases when you want to use programmatic transaction management.
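For completeness, here is a rough sketch of that programmatic style using Spring's TransactionTemplate, reusing the UserDao from the question (the service class name is made up):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class UserServiceProgrammatic {

    private final TransactionTemplate transactionTemplate;
    private final UserDao userDao;

    public UserServiceProgrammatic(PlatformTransactionManager txManager, UserDao userDao) {
        this.transactionTemplate = new TransactionTemplate(txManager);
        this.userDao = userDao;
    }

    public void addUser(final Integer userId) {
        // Commits on normal completion, rolls back if the callback throws
        transactionTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                userDao.insertUser(userId);
            }
        });
    }
}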
I have a Spring-based multi-module application, and in my DAO module the DB (embedded Derby) is started and created by a class that implements ApplicationListener.
The problem is that the logs contain a huge stack trace from Spring saying that there is no DB (it couldn't get a connection).
Still, my application works without any problems. The stack trace appears before the ApplicationListener is invoked and the DB is created. Actually, I only see it the first time I start the application on a machine, because the DB is created only then; afterwards it is simply reused.
So my question is: how can I avoid this exception in the logs? Maybe there is a Spring or Hibernate setting that prevents connecting to the DB before the application context is fully loaded? Or should the code that creates the DB be invoked by some other listener?
Well, here is the way I do it: the ROOT context contains the datasource, the DAO, the service and the transaction manager. In XML config, the declaration of the database is:
<bean id="datasource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"
p:url="jdbc:derby:/path/to/database;create=TRUE"
p:username="user" p:password="pwd"
p:driverClassName="org.apache.derby.jdbc.EmbeddedDriver"/>
It can then be used to declare a session factory for Hibernate and an associated DAO:
<bean class="org.springframework.orm.hibernate4.LocalSessionFactoryBean"
      id="sessionFactory" p:dataSource-ref="datasource">
    <!-- hibernate config -->
    ...
</bean>

<bean class="org.springframework.orm.hibernate4.HibernateTransactionManager"
      name="transactionManager" p:sessionFactory-ref="sessionFactory"/>

<tx:annotation-driven transaction-manager="transactionManager"/>

<bean id="myDao" class="... .myDaoImpl" p:sessionFactory-ref="sessionFactory" .../>
That way everything is created by Spring, which ensures that the creation order is correct. Of course the same is possible in Java config with the same logic, as sketched below.
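A rough Java-config equivalent (a sketch, assuming Spring 3.1+ for @EnableTransactionManagement; paths and credentials are the placeholders from above):

import javax.sql.DataSource;

import org.hibernate.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.hibernate4.HibernateTransactionManager;
import org.springframework.orm.hibernate4.LocalSessionFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class PersistenceConfig {

    @Bean
    public DataSource datasource() {
        // Same settings as the XML declaration above
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:derby:/path/to/database;create=true");
        ds.setUsername("user");
        ds.setPassword("pwd");
        ds.setDriverClassName("org.apache.derby.jdbc.EmbeddedDriver");
        return ds;
    }

    @Bean
    public LocalSessionFactoryBean sessionFactory() {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(datasource());
        // hibernate config (packages to scan, properties, ...)
        return factory;
    }

    @Bean
    public HibernateTransactionManager transactionManager(SessionFactory sessionFactory) {
        return new HibernateTransactionManager(sessionFactory);
    }
}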
I suppose you are fetching some data from the database from inside Spring beans that are being created, perhaps through @PostConstruct or in some other way. Remember that until the Spring context is fully loaded, some beans may have uninitialized beans injected into them.
So do not use the DB and do not call any DAOs until you are sure that the Spring context is fully initialized.
To make such initial calls to DAOs, try the following pattern, which guarantees that the Spring context is complete:
@Component
public class SpringContextMonitor implements ApplicationListener<ApplicationEvent> {

    @Autowired
    private SomeDao dao;

    ...

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ContextRefreshedEvent) {
            onStart((ContextRefreshedEvent) event);
        }
    }

    private void onStart(ContextRefreshedEvent event) {
        // do your initialization here
        dao.getSomething();
        dao2.getSomething();
        ...
    }

    ...
}
The onStart method in the above example is the place where you can be sure that all beans are fully initialized.
I'm using Spring 3.0.6, with Hibernate 3.2.7.GA in a Java-based webapp. I'm declaring transactions with #Transactional annotations on the controllers (as opposed to in the service layer). Most of the views are read-only.
The problem is, I've got some DAOs which use JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction, which means they're not reusing the Hibernate SessionFactory's connection. The reason they're outside the transaction is that I'm using converters on method parameters in the controller, like so:
@Controller
@Transactional
public class MyController {

    @RequestMapping(value="/foo/{fooId}", method=RequestMethod.GET)
    public ModelAndView get(@PathVariable("fooId") Foo foo) {
        // do something with foo, and return a new ModelAndView
    }
}
public class FooConverter implements Converter<String, Foo> {

    @Override
    public Foo convert(String fooId) {
        // call FooService, which calls FooJdbcDao to look up the Foo for fooId
    }
}
My JDBC DAO relies on SimpleJdbcDaoSupport to have the jdbcTemplate injected:
@Repository("fooDao")
public class FooJdbcDao extends SimpleJdbcDaoSupport implements FooDao {

    public Foo findById(String fooId) {
        // map to a Foo object, and return it
        return getJdbcTemplate().queryForObject("select * from foo where ...", new FooRowMapper());
    }
}
and my applicationContext.xml wires it all together:
<mvc:annotation-driven conversion-service="conversionService"/>

<bean id="conversionService" class="org.springframework.context.support.ConversionServiceFactoryBean">
    <property name="converters">
        <set>
            <bean class="FooConverter"/>
            <!-- other converters -->
        </set>
    </property>
</bean>

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource" ref="dataSource"/>
</bean>

<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
      p:sessionFactory-ref="sessionFactory" />
FooConverter (which converts a path variable String to a Foo object) gets called before MyController#get() is called, so the transaction hasn't been started yet. Thus when FooJdbcDao is called to query the database, it has no way of reusing the SessionFactory's connection, and has to check out its own connection from the pool.
So my questions are:
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
If the answer to #1 is no, then is there a way to configure OpenSessionInViewFilter to just start a transaction for us, at the beginning of the request? I'm using "on_close" for the hibernate.connection.release_mode, so the Hibernate Session and Connection are already staying open for the life of the request.
The reason this is important to me is that I'm experiencing problems under heavy load where each thread is checking out 2 connections from the pool: the first is checked out by hibernate and saved for the whole length of the thread, and the 2nd is checked out every time a JDBC DAO needs one for a query outside of a transaction. This causes deadlocks when the 2nd connection can't be checked out because the pool is empty, but the first connection is still held. My preferred solution is to make all JDBC DAOs participate in Hibernate's transaction, so that TransactionSynchronizationManager will correctly share the one single connection.
Is there any way to share a database connection between the SessionFactory and my JDBC DAOs? I'm using HibernateTransactionManager, and from looking at Spring's DataSourceUtils it appears that sharing a transaction is the only way to share the connection.
--> Well, you can share a database connection between the SessionFactory and the JdbcTemplate. What you need to do is share the same DataSource between the two; the connection pool is then shared as well. I am using this in my application.
What you need to do is configure a single HibernateTransactionManager to manage transactions for both kinds of data access.
Add a JdbcDao class (with jdbcTemplate and dataSource properties and their getters/setters) to your existing package structure (in the DAO package/layer), and have your JDBC implementation classes extend JdbcDao, as sketched below. If you have already configured a HibernateTransactionManager for Hibernate, you will not need another one.
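A minimal sketch of such a base class (the class and property names are the ones suggested above; the exact wiring is an assumption):

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcDao {

    private DataSource dataSource;
    private JdbcTemplate jdbcTemplate;

    // Inject the same DataSource that the Hibernate SessionFactory uses,
    // so both kinds of data access share one pool (and, inside a
    // transaction, one connection)
    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public DataSource getDataSource() {
        return dataSource;
    }

    public JdbcTemplate getJdbcTemplate() {
        return jdbcTemplate;
    }
}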
The problem is, I've got some DAOs which are using JdbcTemplate to query the database directly with SQL, and they're being called outside of a transaction. Which means they're not reusing the Hibernate SessionFactory's connection.
--> You may be wrong here. You may in fact be using the same connection; I think the problem may lie only in the HibernateTransactionManager configuration.
Check HibernateTransactionManager javadoc : This transaction manager is appropriate for applications that use a single Hibernate SessionFactory for transactional data access, but it also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource). This allows for mixing services which access Hibernate and services which use plain JDBC (without being aware of Hibernate)!
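In other words, as long as the SessionFactory and the JdbcTemplate are built on the same DataSource, plain JDBC joins the Hibernate transaction. A minimal sketch (the service class, entity and table names are made up):

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MixedAccessService {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Transactional
    public void saveAndAudit(Foo foo) {
        // Hibernate access, managed by HibernateTransactionManager
        sessionFactory.getCurrentSession().save(foo);

        // Plain JDBC on the same DataSource runs on the same connection
        // within this transaction instead of checking out a second one
        jdbcTemplate.update("insert into audit_log (message) values (?)", "saved foo");
    }
}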
Check my question : Using Hibernate and Jdbc both in Spring Framework 3.0
Configuration: add the DAO classes and service classes alongside your current Hibernate classes, and do not make separate packages for them if you want to work with the existing configuration. Otherwise, configure HibernateTransactionManager in the XML configuration and use the @Transactional annotation.
Mistake in your code:

@Controller
@Transactional
public class MyController {......

Use the @Transactional annotation in service classes instead (best practice).
Correction:

@Transactional(readOnly = true)
public class FooServiceImpl implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
I've searched around for this question; there are quite a few of them here on StackOverflow and Google, but I can't seem to get anything working for me.
Here is my code.
Spring config (I don't use any pointcut; I think I don't need to?):
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    ...
</bean>

<bean id="hibernateSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    ...
</bean>

<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="hibernateSessionFactory"/>
</bean>

<tx:annotation-driven transaction-manager="transactionManager"/>
I have a Service class:
@Service
public class ServiceImpl implements ServiceInterface
{
    /**
     * Injected session factory
     */
    @Autowired(required=true)
    private SessionFactory sessionFactory;

    @Autowired(required=true)
    private Dao myDao;

    /**
     * {@inheritDoc}
     */
    @Transactional(rollbackFor=Exception.class, propagation=Propagation.REQUIRED)
    public void scheduleBlast(BlastParameters blastParameters) throws ServiceException
    {
        ... do bunch of stuff ..
        myDao.persist(entity);
        if(true)
            throw new ServiceException("random error");
    }

    .. setter methods and other stuff ..
}
and a Dao class:
public class DaoImpl implements DaoInterface
{
#Autowired(required=true)
private SessionFactory sessionFactory
/**
* {#inheritDoc}
*/
#Transactional(propagation=Propagation.MANDATORY)
public void persist(Entity e) throws DaoException
{
try
{
sessionFactory.getCurrentSession().persist(e);
}
catch(Exception ex)
{
throw new DaoException(ex);
}
}
.. setter methods and other stuff ..
}
Some unnecessary details have been eliminated (e.g. missing setters, etc.); assume that the code works perfectly fine.
My problem with the above is that when I added the line throwing the random exception, the transaction does not roll back: the object persisted through the DAO stays in the DB.
I am using Spring 3.1 and Hibernate 3.6 (because there was a bug with Hibernate 4.0 on Spring 3.1)
Thoughts?
Thank you
I found the cause of my problem and why the transaction was (seemingly) not managed properly.
Somewhere in my code:

/**
 * {@inheritDoc}
 */
@Transactional(rollbackFor=Exception.class, propagation=Propagation.REQUIRED)
public void doWork(Parameters param) throws ServiceException
{
    ... do bunch of stuff ..
    myDao1.persist(entity);

    -- Some logic here --

    ... do bunch of stuff ..
    myDao2.persist(entity2);
    if(true)
        throw new ServiceException("random error");
}
The part where it says "-- Some logic here --" contained logic that used raw SQL and called executeUpdate:
Query query = sessionFactory.getCurrentSession().createSQLQuery(queryText);
query.executeUpdate();
And because it's not a Hibernate query but raw SQL execution, it caused a flush to be called, and thus any work done prior to the call was committed along with it.
I re-worked the flow of the logic to account for this and to make sure the transaction is managed properly. While using raw SQL might be an indication that there's something wrong, it was necessary given what the service tries to accomplish and to improve its performance.
That is the intended behavior of transaction management.
The default behavior for @Transactional is to roll back only for runtime exceptions.
If you want your work to roll back when a DaoException is thrown, add it to the rollbackFor list. Don't forget to also include RuntimeException.
Try the following on the Dao class:

@Transactional(propagation=Propagation.MANDATORY, rollbackFor={RuntimeException.class, DaoException.class})
Try removing the @Transactional annotation from the DaoImpl class. I suspect what might be happening is that the transaction is being committed when it crosses back over that transaction boundary (DaoImpl). I've had mixed success with this setup. You can try some different transaction approaches for the "inner" transaction.
The other thing you can do is turn on Spring transaction logging. I think the category is org.springframework.transaction or something similar. That way you will see exactly what it is doing with respect to rollback and commit of transactions.
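For example, with Log4j that might look like this in log4j.properties (the exact categories can vary by Spring version, so treat these as a starting point):

log4j.logger.org.springframework.transaction=DEBUG
log4j.logger.org.springframework.orm.hibernate3=DEBUG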
You don't have one of those JDBC drivers that are in AUTOCOMMIT mode by default, do you?
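If you want to rule that out explicitly, the commons-dbcp BasicDataSource from the config above accepts a defaultAutoCommit property (a sketch; the rest of the bean is unchanged):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    ...
    <property name="defaultAutoCommit" value="false"/>
</bean>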
I had the same issue when opening connections manually in my Spring repository. After I started using JdbcTemplate, the error disappeared.
I used

@Autowired
private JdbcTemplate jdbc;

in my repository, and then jdbc.query(sql, new YourDTORowMapper()); or jdbc.update(sql); or jdbc.queryForObject(sql, new YourDTORowMapper()); depending on how many objects your query returns.
I also defined the row mapper:

public class YourDTORowMapper implements RowMapper<YourDTOObject> {

    @Override
    public YourDTOObject mapRow(ResultSet resultSet, int i) throws SQLException {
        YourDTOObject dto = new YourDTOObject();
        dto.setId(resultSet.getInt("id")); // here you can change to your column names
        dto.setName(resultSet.getString("name"));
        dto.setAmount(resultSet.getBigDecimal("amount"));
        return dto;
    }
}