Hibernate creates too many connections using @Transactional, how to prevent this?

I'm fairly new to Hibernate and PostgreSQL, but so far it's going well, although I'm running into a problem now that I can't solve. I'm getting an error while filling the database on the very first operation (which is one transaction inserting or updating 1000 rows in the database). The error is:
SQL Error: 0, SQLState: 53300
FATAL: sorry, too many clients already
Exception in thread "main" org.hibernate.exception.GenericJDBCException: Cannot open connection
This is the important code:
@Repository
public class PDBFinderDAO extends GenericDAO<PDBEntry> implements IPDBFinderDAO {

    @Override
    @Transactional
    public void updatePDBEntry(Set<PDBEntry> pdbEntrySet) {
        for (PDBEntry pdbEntry : pdbEntrySet) {
            getCurrentSession().saveOrUpdate(pdbEntry);
        }
    }
}
getCurrentSession() is extended from GenericDAO and calls sessionFactory.getCurrentSession().
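As an aside: when pushing 1000 rows through one session, it is common to flush and clear the session at intervals matching the JDBC batch size, so the first-level cache does not hold all 1000 entities at once. A sketch (not the original code) using the batch size of 50 configured below:

int count = 0;
for (PDBEntry pdbEntry : pdbEntrySet) {
    getCurrentSession().saveOrUpdate(pdbEntry);
    if (++count % 50 == 0) { // same value as hibernate.jdbc.batch_size
        getCurrentSession().flush(); // push this batch of statements to the database
        getCurrentSession().clear(); // detach the flushed entities to free memory
    }
}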
This is my Hibernate configuration:
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
        <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
        <property name="hibernate.connection.url">jdbc:postgresql://localhost/PDBeter</property>
        <property name="hibernate.connection.username">xxxx</property>
        <property name="hibernate.connection.password">xxxx</property>
        <!-- Create or update the database schema on startup -->
        <property name="hbm2ddl.auto">create</property>
        <!-- Use the C3P0 connection pool provider -->
        <property name="hibernate.c3p0.min_size">5</property>
        <property name="hibernate.c3p0.max_size">20</property>
        <property name="hibernate.c3p0.timeout">300</property>
        <property name="hibernate.c3p0.max_statements">50</property>
        <property name="hibernate.c3p0.idle_test_period">300</property>
        <!-- Disable the second-level cache -->
        <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
        <!-- Batch size -->
        <property name="hibernate.jdbc.batch_size">50</property>
        <!-- This makes sure the more efficient new id generators are used,
             though these are not backwards compatible with some older databases -->
        <property name="hibernate.id.new_generator_mappings">true</property>
        <!-- Echo all executed SQL to stdout -->
        <!--
        <property name="hibernate.show_sql">true</property>
        -->
        <property name="format_sql">true</property>
        <property name="use_sql_comments">true</property>
    </session-factory>
</hibernate-configuration>
This is my Spring configuration:
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://www.springframework.org/schema/beans"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="
           http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
           http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd">

    <context:component-scan base-package="nl.ru.cmbi.pdbeter" />

    <!-- Transaction Manager -->
    <bean id="transactionManager"
          class="org.springframework.orm.hibernate3.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>
    <tx:annotation-driven />

    <!-- Session Factory -->
    <bean id="sessionFactory"
          class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
        <property name="configLocation" value="hibernate.cfg.xml" />
        <property name="packagesToScan" value="nl.ru.cmbi.pdbeter.core.model.domain" />
    </bean>

    <!-- Task Executor -->
    <task:annotation-driven />
</beans>
I'm not really sure what is going wrong. This should only open one connection each time and close it afterwards; isn't that what @Transactional is supposed to do? Also, is there an easy way to check how many connections are open at a given time, so that I can see how many were open before and after the error?
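For reference, PostgreSQL exposes open connections in its pg_stat_activity view, so a quick standalone check (a diagnostic sketch, separate from the project code) could look like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionCount {
    public static void main(String[] args) throws Exception {
        // Counts sessions connected to the PDBeter database; run before and after the error.
        try (Connection con = DriverManager.getConnection("jdbc:postgresql://localhost/PDBeter", "xxxx", "xxxx");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT count(*) FROM pg_stat_activity WHERE datname = current_database()")) {
            rs.next();
            System.out.println("open connections: " + rs.getInt(1));
        }
    }
}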
EDIT: when I check the database, nothing has been added, so it can't even make one connection. What is going wrong here?
EDIT: I'm sorry, but I already solved it myself. It was a very stupid mistake: there was a very small query using criteria that was also executed 1000 times, but that one ran before the transaction, causing it to be executed in 1000 separate transactions/sessions/connections (I think, correct me if I'm wrong!).
EDIT: OK, it turns out that didn't solve it at all, because I need that small query to see whether something is already in the database, and if so, to get that object from the database so I can update its fields/columns/whatever you want to call it.
This is the method in the GenericDAO:
@Override
public PDBEntry findByAccessionCode(String accessionCode) {
    return (PDBEntry) createCriteria(Restrictions.eq("accessionCode", accessionCode)).uniqueResult();
}
There is a function that builds the mapped objects that isn't in the DAO, since it converts a raw data file into the database object; I wanted to keep that out of the database operations and have only the saveOrUpdate() within the database module. The problem I have now is that findByAccessionCode() is called 1000 times during the conversion of the raw data file to the database objects, because I need to check whether a certain piece of data is already present in the database, and if so, get the object from the database instead of making a new one.
Now how would I execute that query 1000 times inside one connection in this context? I tried making the conversion method that converts the 1000 files @Transactional, but that didn't work.
Here's the conversion method:
private void updatePDBSet(Set<RawPDBEntry> RawPDBEntrySet) {
    Set<PDBEntry> pdbEntrySet = new LinkedHashSet<PDBEntry>();
    for (RawPDBEntry pdb : RawPDBEntrySet) {
        PDBEntry pdbEntry = pdbEntryDAO.findByAccessionCode(pdb.id);
        if (pdbEntry == null) {
            pdbEntry = new PDBEntry(pdb.id, pdb.header.date);
        }
        pdbEntry.setHeader(pdb.header.header);
        ExpMethod expMethod = new ExpMethod.Builder(pdbEntry, pdb.expMethod.expMethod.toString()).build();
        if (pdb.expMethod.resolution != null) {
            expMethod.setResolution(pdb.expMethod.resolution);
        }
        if (pdb.expMethod.rFactor != null) {
            expMethod.setRFactor(pdb.expMethod.rFactor.rFactor);
            if (pdb.expMethod.rFactor.freeR != null) {
                expMethod.setFreeR(pdb.expMethod.rFactor.freeR);
            }
        }
        if (pdb.hetGroups != null) {
            for (PFHetId hetId : pdb.hetGroups.hetIdList) {
                HetGroup hetGroup = new HetGroup(pdbEntry, hetId.hetId);
                if (hetId.nAtom != null) {
                    hetGroup.setNAtom(hetId.nAtom);
                }
                if (hetId.name != null) {
                    hetGroup.setName(hetId.name);
                }
            }
        }
        for (PFChain chain : pdb.chainList) {
            new Chain(pdbEntry, chain.chain);
        }
        pdbEntrySet.add(pdbEntry);
    }
    pdbFinderDAO.updatePDBEntry(pdbEntrySet);
}
(The pdbFinderDAO.updatePDBEntry(pdbEntrySet) was where I originally thought the problem originated)
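For completeness, one way to get both the 1000 lookups and the final save into a single transaction (and therefore one connection) is to put the whole conversion on a Spring-managed service bean, so the transactional proxy wraps the entire loop. A sketch under that assumption (PDBConversionService is a hypothetical name):

@Service
public class PDBConversionService {

    @Autowired
    private IPDBFinderDAO pdbFinderDAO;

    @Transactional // one session/connection spans all lookups and saves
    public void updatePDBSet(Set<RawPDBEntry> rawPDBEntrySet) {
        // ... the conversion loop from above, unchanged ...
    }
}

Note that this only works when the method is invoked through the Spring proxy, i.e. called on the injected bean from another bean; a plain internal call bypasses the proxy and with it the transaction.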
EDIT: First of all, sorry that I created this new post; I really thought I had found the answer. I'll just continue in this post for further edits.
Ok, now I've put all 1000 findByAccessionCode criteria inside the DAO by sending a Set of the raw data files to the DAO, so it can retrieve the ids there, look them up in the database, get the ones it can find, and add them to a HashMap in which each database object is mapped to a reference to its raw data file as key (so I know which raw data belongs to which database entry). I made this function @Transactional like so:
@Override
@Transactional
public Map<RawPDBEntry, PDBEntry> getRawPDBEntryToPDBEntryMap(Set<RawPDBEntry> rawPDBEntrySet) {
    Map<RawPDBEntry, PDBEntry> rawPDBEntryToPDBEntryMap = new HashMap<RawPDBEntry, PDBEntry>();
    for (RawPDBEntry pdb : rawPDBEntrySet) {
        rawPDBEntryToPDBEntryMap.put(pdb, (PDBEntry) createCriteria(Restrictions.eq("accessionCode", pdb.id)).uniqueResult());
    }
    return rawPDBEntryToPDBEntryMap;
}
Still, no success... I get the exact same error, but it does tell me it's the criteria that causes it. Why can't I execute all these 1000 queries within the same connection?
EDIT: Yet another update: I tried adding all the queries one by one, and this worked, slowly, but it worked. I did this on an empty database. Next I tried the same thing, but now the database already contained everything from the first try, and I got the following error:
Exception in thread "main" org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I'm guessing this has something to do with the fact that I'm getting the objects (which are already in the database and so have to be updated) within the DAO, then sending references back to the conversion method, then changing their fields, then sending them back to the DAO to make them persistent. Although, after googling a bit, I found people who had problems with collections in their POJOs annotated with:
@OneToMany(mappedBy = "pdbEntry", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
where the cascading caused the problem. Do I have to remove all of these cascades and hand-code the saveOrUpdate() operations for all the different mapped objects? Or has this nothing to do with that error?
And finally: I'm still no closer to figuring out how to do this for 1000 objects at a time.

Solved the problem: it was caused by a bad Spring setup, which meant the @Transactional annotation was not recognized. I fixed that, and the error went away.
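For anyone hitting the same wall: a quick way to verify that @Transactional is actually being picked up is to check that the injected DAO is a transactional proxy (a diagnostic sketch; AopUtils is Spring's helper class):

import org.springframework.aop.support.AopUtils;

// somewhere with access to the injected bean:
System.out.println("DAO is proxied: " + AopUtils.isAopProxy(pdbFinderDAO)); // true once <tx:annotation-driven /> takes effect

A common culprit is declaring <tx:annotation-driven /> in a different application context than the one whose component scan creates the @Transactional beans.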

Related

Spring/Hibernate data access concurrency problem

I've been given full responsibility for a bunch of large projects that I didn't write myself and I am hoping for some advice here because these projects have issues that are keeping me up at night and much of this is new to me.
I've noticed that database queries through the ORM sometimes fail to find an entity that was already supposed to have been saved, causing invalid state and crashes. The problem becomes more apparent with more concurrent threads accessing the database through the ORM.
All the assistance I can get with troubleshooting would be much appreciated.
Relevant dependencies
Web framework: Spring MVC
ORM: Hibernate
DBMS: MySQL 5.6
ehcache: 2.10.6
Hibernate: 5.1.17.Final
Hibernate validator: 5.4.3.Final
MySQL connector: 5.1.48
Spring: 4.3.29.RELEASE
Spring integration: 4.3.23.RELEASE
Spring security: 4.2.19.RELEASE
The web app runs under Tomcat 7 with Java 8 runtime.
We're behind on several dependencies and I want to tackle that another day.
Troubleshooting
I read that EntityManager by itself is not thread-safe and that @PersistenceContext turns it into a proxy that allows thread-safe access to the underlying EntityManager. I can also see that it's some sort of proxy in the debugger. Now I don't know exactly how that works and whether the way we use it is really safe.
Despite that, I notice the following:
An incoming HTTP request causes an entity to be saved to the database through the ORM.
Shortly afterwards, another incoming HTTP request wants to find the same entity through the ORM.
Most of the time the entity is found as expected, but sometimes it is not found, despite the fact that it should have been saved already.
If I try to find the entity on the same thread right after saving it then it's always found, but if I launch a new thread right after saving it then the entity may not be found in that thread unless I Thread.sleep() before the lookup.
I can see in the debugger that the same instance of SomeService, SomeDaoImpl and the EntityManager proxy in the example code below is reused for multiple incoming HTTP requests and handled by different threads.
Even though our own classes are stateless, there is apparently a problem with data access through the ORM.
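One way to take the caches out of the equation during troubleshooting (a diagnostic sketch, not our production code; entity and field names are placeholders) is the standard JPA hint that bypasses the second-level cache for a single lookup:

import javax.persistence.CacheRetrieveMode;
import javax.persistence.EntityManager;

// hypothetical variant of findByUuid that always goes to the database
public SomeEntity findByUuidBypassingCache(String uuid) {
    EntityManager em = getEntityManager();
    return em.createQuery("SELECT e FROM SomeEntity e WHERE e.uuid = :uuid", SomeEntity.class)
             .setParameter("uuid", uuid)
             .setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS)
             .getSingleResult();
}

If the entity is still sometimes missing with the hint in place, the caches can be ruled out as the cause.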
Greatly simplified pseudocode
interface Dao {
    Entity save(entity);
}

abstract class AbstractDao implements Dao {

    @PersistenceContext
    private EntityManager em; // proxy

    @Override
    public Entity save(entity) {
        em.persist(entity); // or return this.em.merge(entity)
        return entity;
    }

    protected EntityManager getEntityManager() {
        return em;
    }
}

abstract class AbstractUuidDao extends AbstractDao {

    public Entity findByUuid(uuid) {
        em = getEntityManager();
        query = em.getCriteriaBuilder().createQuery(...);
        // ...
        return em.createQuery(query).setParameter(..., uuid).getSingleResult();
    }
}

interface SomeDao extends Dao {}

@Repository
class SomeDaoImpl extends AbstractUuidDao implements SomeDao {}

@Service("someService")
@Transactional(readOnly = true, propagation = Propagation.SUPPORTS)
class SomeService {

    @Autowired
    private SomeDao someDao;

    // the first HTTP request calls this
    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
    void createSomething() {
        uuid = "random uuid";
        someDao.save(new SomeEntity(uuid));
        someDao.findByUuid(uuid); // always returns non-null
    }

    // after createSomething() returns, the transaction is committed

    // two HTTP requests can call this simultaneously, but they come in after
    // createSomething() has returned, and therefore after the transaction has
    // already been committed
    void getSomething(uuid) {
        // uuid is the same "random uuid"
        someDao.findByUuid(uuid); // can return either null or non-null
    }
}
I believe this is the persistence configuration:
<bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5Dialect" />
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="persistenceUnitName" value="model" />
<property name="jpaDialect">
<bean class="custom class here" />
</property>
<property name="jpaVendorAdapter" ref="jpaVendorAdapter" />
<property name="jpaPropertyMap">
<map>
<entry key="hibernate.connection.release_mode" value="after_transaction" />
<entry key="hibernate.current_session_context_class" value="org.springframework.orm.hibernate4.SpringSessionContext" /> <!-- also tried "org.springframework.orm.hibernate5.SpringSessionContext" -->
<entry key="hibernate.cache.use_query_cache" value="true" /> <!-- also tried "false" -->
<entry key="hibernate.cache.use_second_level_cache" value="true" /> <!-- also tried "false" -->
<entry key="hibernate.cache.use_minimal_puts" value="false" />
<entry key="hibernate.cache.region.factory_class" value="org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory" />
<entry key="hibernate.default_batch_fetch_size" value="8" />
<entry key="hibernate.jdbc.fetch_size" value="10" />
<entry key="hibernate.jdbc.batch_size" value="100" />
<entry key="hibernate.order_inserts" value="true" />
<entry key="hibernate.order_updates" value="true" />
<entry key="hibernate.format_sql" value="false" />
<entry key="hibernate.show_sql" value="false" />
</map>
</property>
</bean>
I'm sorry for the lack of real code and details but it is difficult for me to provide a working example without making the example more complex than needed.
Any ideas would be much appreciated.
Update 1
I revised the example code in my original post to more correctly reflect the real use case, but it does not change much.
I can see that everything happens in the expected sequence, i.e. the transaction is committed and subsequent queries only happen afterwards.
I replaced our custom JPA dialect with org.springframework.orm.jpa.vendor.HibernateJpaDialect since it seems like the custom class is no longer needed because it was meant to add support for custom isolation levels. That did not help.
I have upgraded Hibernate from version 5.1.17 to version 5.2.18 and reimplemented an incompatible third party Hibernate user type that prevented the upgrade.
I have not seen a single failure since that upgrade but I will keep testing.
Update 2
The issue does not seem to occur with Hibernate version 5.2.0 either.
The following bugfix mentioned in the changelog looks very much related to my problem:
[HHH-10649] - When 2LC enabled, flush session and then refresh entity cause dirty read in another session / transaction
If "2LC" means "second-level cache" then I have as mentioned before had the problem even after turning this off.

EntityManager.merge() is not being committed (Wildfly, JPA, JTA)

I can persist new data, but I cannot do updates. There are no errors, just no transactions committing the changes. I'm assuming this has something to do with the way that I've set up transactions. I'm trying a bunch of relatively new (to me) set of technologies. Below are the details.
I'm using the following tools/technologies:
Wildfly 8 and Java 7 (which is what my hosting service uses)
Annotations, with minimal XML being the goal
Struts 2.3 (using the convention plugin)
Spring 3.2
Hibernate 4.3
JTA (with container managed transactions (CMT))
JPA 2 (with a Container Managed Persistence Context)
EJBs (I have a remote client app that runs htmlunit tests)
Three WAR files and one EJB JAR file deployed
SpringBeanAutowiringInterceptor to autowire the EJBs (could there be an error in here where transactions don't commit?)
beanRefContext.xml (required by SpringBeanAutowiringInterceptor)
<beans>
    <bean class="org.springframework.context.support.ClassPathXmlApplicationContext">
        <constructor-arg value="classpath:campaignerContext.xml" />
    </bean>
</beans>
campaignerContext.xml
<beans>
    <context:component-scan base-package="..." />
    <jee:jndi-lookup id="dataSource" jndi-name="jdbc/CampaignerDS"/>
    <tx:annotation-driven/>
    <tx:jta-transaction-manager/>
    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="persistenceUnitName" value="campaigner" />
    </bean>
    <bean id="ehCacheManager" class="net.sf.ehcache.CacheManager" factory-method="create">
        <constructor-arg type="java.net.URL" value="classpath:/campaigner_ehcache.xml"/>
    </bean>
</beans>
persistence.xml
<persistence>
    <persistence-unit name="campaigner" transaction-type="JTA">
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <jta-data-source>java:/jdbc/CampaignerDS</jta-data-source>
        <class>....UserRegistration</class>
        ...
        <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
        <properties>
            <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform" />
        </properties>
    </persistence-unit>
</persistence>
SecurityServiceBean.java
@EnableTransactionManagement
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
@Stateless
@Interceptors(SpringBeanAutowiringInterceptor.class)
@DeclareRoles("Security Admin")
public class SecurityServiceBean extends AbstractCampaignerServiceImpl implements
        SecurityServiceLocal, SecurityServiceRemote {

    @Override
    @PermitAll
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public UserRegistration confirmRegistration(String confirmationCode) throws ApplicationException {
        UserRegistration userRegistration = this.userRegistrationDAO
                .find(new UserRegistrationQuery(null, confirmationCode)).uniqueResult(); // Should be attached now
        ...
        userRegistration.setConfirmationDate(new Date());
        userRegistration.setState(State.CONFIRMED);
        userRegistration = this.userRegistrationDAO.saveOrUpdate(userRegistration);
        ...
    }
}
UserRegistrationDAO.java
@Override
public UserRegistration saveOrUpdate(UserRegistration obj) throws DAOException {
    log.debug("[saveOrUpdate] isJoinedToTransaction? "
            + (this.em.isJoinedToTransaction() ? "Y " : "N"));
    try {
        if (obj.getId() == null) {
            this.em.persist(obj);
            log.debug("[saveOrUpdate] called persist()");
            return obj;
        } else {
            UserRegistration attached = this.em.merge(obj);
            log.debug("[saveOrUpdate] called merge()");
            return attached;
        }
    } catch (PersistenceException e) {
        throw new DAOException("[saveOrUpdate] obj=" + obj.toString() + ",msg=" + e.getMessage(), e);
    }
}
Are there any settings in Wildfly's standalone.xml that you need to see or that I should be setting?
BTW, this is incredibly annoying and frustrating. This should be an easy one-time setup that I can do and then forget about as I move on to creating my website, which is where most of my time should be spent. The lack of comprehensive documentation anywhere is AMAZING. Right now, development is halted until this is solved.
/rant
UPDATES
I tried switching to an XA data source, because some sites claimed that was necessary, but that didn't work (I didn't think it would, but I had to try). I also tried configuring the emf with dataSource instead of persistenceUnitName, as some other sites suggest. No joy.
I tried replacing the transactionManager with JpaTransactionManager, but that just led to this exception: A JTA EntityManager cannot use getTransaction()
The answer, thanks to M. Deinum, is that I was using the wrong @Transactional. I should have been using javax.transaction.Transactional but was using the Spring one instead. Note that the correct one will look like "@Transactional(TxType.REQUIRES_NEW)" instead of "@Transactional(propagation = Propagation.REQUIRES_NEW)"
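In code, the difference the answer describes looks roughly like this (method signature taken from the bean above):

// Wrong in this container-managed setup: Spring's annotation, which nothing here processes
// import org.springframework.transaction.annotation.Transactional;
// @Transactional(propagation = Propagation.REQUIRES_NEW)

// Right: the JTA annotation, honored by the container
import javax.transaction.Transactional;
import javax.transaction.Transactional.TxType;

@Transactional(TxType.REQUIRES_NEW)
public UserRegistration confirmRegistration(String confirmationCode) throws ApplicationException {
    // ...
}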

Sync problems between entity objects on Hibernate 3 and Struts

I work on a complex web application that uses EJB and Hibernate on JBoss. I use a singleton EntityManagerFactory and share it between all running processes, each using its own EntityManager instance.
The problem occurs when a Struts action calls an update on an entity and, before the action ends, another process reads and updates the same object.
That second process (a web service called by third parties) reads the old value, not the one updated in the action.
I know that data becomes persistent in the database only after the action finishes its work and control returns to the user. Unfortunately, after the EntityManager merge this action must call a web service that sometimes returns after 10 s. Meanwhile, any other process that reads this object gets the wrong value.
I need the merge in the first process to become persistent immediately, or I need the other processes to read the right value.
I don't know whether the second-level cache is active and has an effect in this scenario.
One solution would be to do the update using JDBC instead of Hibernate, but I would like a clean way to do it.
A brief outline:
t0 = action starts; t1 = action finds and merges the entity; t2 = call to the web service starts; t6 = web service returns; tend = action ends
t3 = second process starts; t4 = second process finds and merges the entity; t5 = second process ends

t0    t1    t2    t3    t4    t5    t6    tend
|-----|-----|-----------------------|-----|
            |     |-----|-----|
I need the value read at t3 to be the one merged at t2.
This is my persistence.xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="ApplicationWeb_EJB" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/ds/VisiaIntegrazioneDS</jta-data-source>
<class>.....entity.ApplicationServer</class>
....
<class>.....entity.Devices</class>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect"/>
<!-- Caching properties -->
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<!--<property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider" />-->
<property name="net.sf.ehcache.configurationResourceName" value="ehcache.xml"/>
<!--<property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/>-->
<!--<property name="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider" />-->
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.region.factory_class" value="net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup"/>
<property name="hibernate.max_fetch_depth" value="4"/>
<!-- hibernate.generate_statistics a true produce informazioni su hibernate da loggare -->
<property name="hibernate.generate_statistics" value="true"/>
</properties>
</persistence-unit>
</persistence>
This is an example of an EntityManager update:
EntityManager em = EntityMan.getEMF().createEntityManager();
try {
    em.find(Devices.class, device.getId());
    em.merge(device);
    em.flush();
} catch (Exception e) {
    logger.debug(e.getMessage());
    e.printStackTrace();
}
I am trying to figure out the whole process, so the statements I report below are probably based on an incomplete picture.
If I understood correctly, you would have the problem even if the web service took 10 ms; a request could come in the middle in any case.
But am I wrong in saying that you are creating the entity manager yourself, rather than having it injected by the container? If you share the same manager within all your singleton methods, you have more control over concurrent access to the cache.
Second, if the call to the web service is not mandatory for the final value, is there a particular reason why you are not invoking it asynchronously, with a message-driven bean or the @Asynchronous annotation?
[UPDATE] You cannot have a real-time concurrent-access application that relies on a 10 s WS response.
For example, if the last-step WS call fails, what do you do? Roll back? Then what if the new incoming client has read uncommitted data that you later roll back?
As some others have probably said, make the last WS call asynchronous, or better, delegate it to a message-driven bean (which has the advantage of retrying the WS call in case of failure). Either way the call returns to the client immediately.
But I am quite confident that you would have the same problem again. If I understood the architecture correctly, I would revisit the design to do these things:
Invoke the WS asynchronously or with an MDB in order to return the call to the client (see the sketch after this list)
Make the calls to the entity manager, at least the ones that touch the same table, thread-safe. This should be easy, because you have a singleton class
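A minimal sketch of the @Asynchronous variant (bean and method names are hypothetical):

import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class RemoteNotifierBean {

    // The container returns to the caller immediately and runs this method
    // on a separate thread, so the action's transaction can commit without
    // waiting ~10 s for the web service round trip.
    @Asynchronous
    public void notifyWebService(Integer deviceId) {
        // ... call the slow third-party web service here ...
    }
}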

Spring transaction doesn't rollback when switching to JDBCTemplate programmatically

I have a use case in which I need to get data from one Oracle schema and insert it into another schema, table by table. For reading and writing I use different datasources through JdbcTemplate, and the switching between them is done in code. Additionally I have a Hibernate connection that I use to read data from configuration tables; this is also my default connection, the one set through autowiring when the application starts. I am using Spring 4, Hibernate 4.3 and Oracle 11.
For the JdbcTemplate I have an abstract class that holds it, like this:
public abstract class GenericDao implements SystemChangedListener {

    private NamedParameterJdbcTemplate jdbcTemplate;

    /**
     * Initializes the bean with the definition data source through @Autowired.
     * @param definitionDataSource an instance of DataSource
     */
    @Autowired
    private void setDataSource(DataSource definitionDataSource) {
        this.jdbcTemplate = new NamedParameterJdbcTemplate(definitionDataSource);
    }

    public NamedParameterJdbcTemplate getNamedParameterJdbcTemplate() {
        return this.jdbcTemplate;
    }

    @Override
    public void updateDataSource(DataSource dataSource) {
        this.setDataSource(dataSource);
    }
}
The SystemChangedListener interface defines the updateDataSource method, which is called when the DataSource is switched through a service method, like this:
public class SystemServiceImpl implements SystemService, SystemChangable {

    private List<GenericDao> daoList;

    @Autowired
    public void setDaoList(final List<GenericDao> daoList) {
        this.daoList = daoList;
    }

    @Override
    public void notifyDaos(SystemDTO activeSystem) {
        logger.debug("Notifying DAO of change in datasource...");
        for (GenericDao dao : this.daoList) {
            dao.updateDataSource(activeSystem.getDataSource());
        }
        logger.debug("...done.");
    }

    @Override
    public Boolean switchSystem(final SystemDTO toSystem) {
        logger.info("Switching active system...");
        notifyDaos(toSystem);
        logger.info("Active system and datasource switched to: " + toSystem.getName());
        return true;
    }
}
The switching works perfectly for reading so far; I can switch between schemas with no problem. But if I get an exception for some reason during the copying, the transaction doesn't get rolled back.
This is my copying method:
@Transactional(rollbackFor = RuntimeException.class, propagation = Propagation.REQUIRED)
public void replicateSystem(String fromSystem, String toSystem) throws ApplicationException {
    // FIXME: pass the user as information
    // TODO: actually the method should take some model from the view and transform it in DTOs and stuff
    StringBuffer protocolMessageBuf = new StringBuffer();
    ReplicationProtocolEntryDTO replicationDTO = new ReplicationProtocolEntryDTO();
    String userName = "xxx";
    Date startTimeStamp = new Date();
    try {
        replicationStatusService.markRunningReplication();
        List<ManagedTableReplicationDTO> replications = retrieveActiveManageTableReplications(fromSystem, toSystem);
        protocolMessageBuf.append("Table count: ");
        protocolMessageBuf.append(replications.size());
        protocolMessageBuf.append(". ");
        for (ManagedTableReplicationDTO repDTO : replications) {
            protocolMessageBuf.append(repDTO.getTableToReplicate());
            protocolMessageBuf.append(": ");
            logger.info("Switching to source system: " + repDTO.getSourceSystem());
            SystemDTO system = systemService.retrieveSystem(repDTO.getSourceSystem());
            systemService.switchSystem(system);
            ManagedTableDTO managedTable = managedTableService.retrieveAllManagedTableData(repDTO.getTableToReplicate());
            protocolMessageBuf.append(managedTable.getRows() != null ? managedTable.getRows().size() : null);
            protocolMessageBuf.append("; ");
            ManagedTableUtils managedTableUtils = new ManagedTableUtils();
            List<String> inserts = managedTableUtils.createTableInserts(managedTable);
            logger.info("Switching to target system: " + repDTO.getSourceSystem());
            SystemDTO targetSystem = systemService.retrieveSystem(repDTO.getTargetSystem());
            systemService.switchSystem(targetSystem);
            // TODO: what about constraints? foreign keys
            logger.info("Cleaning up data in target table: " + repDTO.getTargetSystem());
            managedTableService.cleanData(repDTO.getTableToReplicate());
            /*
            managedTableDao.deleteContents(repDTO.getTableToReplicate());
            */
            // importing the data
            managedTableService.importData(inserts);
            /*
            for (String insrt : inserts) {
                managedTableDao.executeSqlInsert(insrt);
            }
            */
            protocolMessageBuf.append("Replication successful.");
        }
    } catch (ApplicationException ae) {
        protocolMessageBuf.append("ERROR: ");
        protocolMessageBuf.append(ae.getMessage());
        throw new RuntimeException("Error replicating a table. Rollback.");
    } finally {
        replicationDTO = this.prepareProtocolRecord(userName, startTimeStamp, protocolMessageBuf.toString(), fromSystem, toSystem);
        replicationProtocolService.writeProtocolEntry(replicationDTO);
        replicationStatusService.markFinishedReplication();
    }
}
What I do is retrieve a list of the tables whose content should be copied and, in a loop, generate insert statements for them, delete the contents of the target table, and execute the inserts with:
public void executeSqlInsert(String insert) throws DataAccessException {
    getNamedParameterJdbcTemplate().getJdbcOperations().execute(insert);
}
In this the correct DataSource is used: the DataSource of the target system. When, for instance, there's an SQLException somewhere during the insertion of the data, the deletion of the data is still committed and the data of the target table is lost. I have no problem with getting exceptions; in fact this is part of the requirement. All exceptions should be recorded in the protocol, and the whole copying process must be rolled back if there are any.
Here's my db.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:jpa="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.0.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/data/jpa http://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
<!-- Scans within the base package of the application for #Components to configure as beans -->
<bean id="placeholderConfig"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="location" value="classpath:/db.properties" />
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="de.telekom.cldb.admin"
p:dataSource-ref="dataSource"
p:jpaPropertyMap-ref="jpaPropertyMap"
p:jpaVendorAdapter-ref="hibernateVendor" />
<bean id="hibernateVendor" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true" />
<property name="generateDdl" value="true" />
<property name="databasePlatform" value="${db.dialect}" />
</bean>
<!-- system 'definition' data source -->
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close"
p:driverClassName="${db.driver}"
p:url="${db.url}"
p:username="${db.username}"
p:password="${db.password}" />
<!--
p:maxActive="${dbcp.maxActive}"
p:maxIdle="${dbcp.maxIdle}"
p:maxWait="${dbcp.maxWait}"/>
-->
<util:map id="jpaPropertyMap">
<entry key="generateDdl" value="false"/>
<entry key="hibernate.hbm2ddl.auto" value="validate"/>
<entry key="hibernate.dialect" value="${db.dialect}"/>
<entry key="hibernate.default_schema" value="${db.schema}"/>
<entry key="hibernate.format_sql" value="false"/>
<entry key="hibernate.show_sql" value="true"/>
</util:map>
<tx:annotation-driven transaction-manager="transactionManager" />
<!-- supports both JDBCTemplate connections and JPA -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
So my problem is that the transaction isn't rolled back. Also, I don't see any clues in the log file that a transaction is started at all. What am I doing wrong?
Thank you for the help!
al
As I said in my comment, by default the Spring framework marks a transaction for rollback only in the case of runtime, i.e. unchecked, exceptions (any exception that is a subclass of RuntimeException is included). Checked exceptions thrown from a transactional method do not trigger automatic rollback.
Why? It's simple: checked exceptions must be handled or declared, so throwing a checked exception out of the transactional method tells the Spring framework that (the thrown exception occurred and) you know what you're doing, and the framework skips the rollback. An unchecked exception is considered a bug or bad exception handling, so the transaction is rolled back to avoid data corruption.
In your replicateSystem method you handle ApplicationException; an ApplicationException does not trigger automatic rollback, because when the exception occurs the client (application) has an opportunity to recover.
According to the docs, application exceptions are those that do not extend RuntimeException.
As far as I know, in EJB you can use @ApplicationException(rollback=true) if the transaction needs to be rolled back automatically.
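In code, two ways of making the checked exception trigger a rollback look roughly like this (a sketch reusing the names from the question):

// Option 1 (Spring): also roll back on the checked exception
@Transactional(rollbackFor = ApplicationException.class, propagation = Propagation.REQUIRED)
public void replicateSystem(String fromSystem, String toSystem) throws ApplicationException {
    // ...
}

// Option 2 (EJB, not Spring): mark the exception class itself
@ApplicationException(rollback = true)
public class ApplicationException extends Exception {
    // ...
}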
I'm not sure, but I think the problem is at this point:
// TODO: what about constraints? foreign keys
logger.info("Cleaning up data in target table: " + repDTO.getTargetSystem());
managedTableService.cleanData(repDTO.getTableToReplicate());
If the clearing of the tables goes through a TRUNCATE some_table, then at this point Oracle commits the transaction, because TRUNCATE is DDL and commits implicitly.

Call stored procedure on mysql slave using ReplicationDriver

I have a Spring application that currently executes some queries utilizing stored procedures. The configuration is something like this:
Datasource:
<bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.mysql.jdbc.ReplicationDriver"/>
<property name="url" value="jdbc:mysql:replication://master,slave1,slave2/db?allowMultiQueries=true"/>
<property name="username" value="${db.dbusername}"/>
<property name="password" value="${db.dbpassword}"/>
<property name="defaultReadOnly" value="true"/>
</bean>
<bean id="jdbcDeviceDAO" class="dao.jdbc.JdbcDeviceDAO">
<property name="dataSource" ref="dataSource"/>
</bean>
DAO:
public class JdbcDeviceDAO implements DeviceDAO {
    // ...
    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        this.procGetCombinedDeviceRouting = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("get_combined_device_routing");
        // ...
    }

    public CombinedDeviceRouting getCombinedDeviceRouting(String deviceName, String deviceNameType) {
        SqlParameterSource in = createParameters(deviceName, deviceNameType);
        Map<String, Object> results = this.procGetCombinedDeviceRouting.execute(in);
        return extractResults(results);
    }
}
Now when I call getCombinedDeviceRouting(...) it fails with the following exception:
org.springframework.dao.TransientDataAccessResourceException: CallableStatementCallback; SQL [{call get_combined_device_routing()}]; Connection is read-only. Queries leading to data modification are not allowed; nested exception is java.sql.SQLException: Connection is read-only. Queries leading to data modification are not allowed
I know the connection is read-only, and I need it to be that way so the queries are load-balanced between slave hosts. But the stored procedure actually is read-only; it's just a lot of SELECT statements. In fact I tried adding READS SQL DATA to its definition, but it didn't work.
Finally I got to the point of reading the MySQL connector's code, and I found this:
protected boolean checkReadOnlySafeStatement() throws SQLException {
    synchronized (checkClosed().getConnectionMutex()) {
        return this.firstCharOfStmt == 'S' || !this.connection.isReadOnly();
    }
}
It sounds naive, but is the connector checking whether my statement is read-only by just matching the first character with 'S'?
If this is the case, it seems like there's no way of calling a stored procedure on a slave host, because the statement starts with 'C' (CALL ...).
Does anyone know if there's a workaround for this problem? Or maybe I'm wrong assuming this first character check?
It appears as though this is a bug in the driver. I had a look at the code to see if there is an easy extension point, but it looks like you'd have to extend a lot of classes to affect this behaviour :(
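One blunt workaround, if it is acceptable to route this single call to the master (an assumption, and it forfeits slave load-balancing for that call): the ReplicationDriver picks the host based on the connection's read-only flag, so you can flip it off just for the CALL. A sketch using the question's jdbcTemplate (imports assumed: java.sql.CallableStatement, java.sql.Connection, java.sql.SQLException, org.springframework.jdbc.core.ConnectionCallback):

jdbcTemplate.execute(new ConnectionCallback<Void>() {
    @Override
    public Void doInConnection(Connection con) throws SQLException {
        boolean wasReadOnly = con.isReadOnly();
        con.setReadOnly(false); // non-read-only work is routed to the master
        try (CallableStatement cs = con.prepareCall("{call get_combined_device_routing(?, ?)}")) {
            // ... set parameters, execute and map the results as before ...
            cs.execute();
        } finally {
            con.setReadOnly(wasReadOnly);
        }
        return null;
    }
});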
