Sync problems between entity objects on Hibernate 3 and Struts - java

I work on a complex web application that uses EJB and Hibernate on JBoss. I use a singleton EntityManagerFactory and share it among all running processes, each of which creates its own EntityManager instance.
The problem occurs when a Struts action calls an update on an entity and, before the action ends, another process reads and updates the same object.
That second process (a web service called by a third party) reads the old value, not the one updated in the action.
I know that the data becomes persistent in the database only after the action ends its work and control returns to the user. Unfortunately, after the EntityManager merge, this action must call a web service that sometimes takes 10 s to return. Meanwhile, any other process that reads this object sees the stale value.
I need the merge in the first process to become persistent instantly, or I need the other process to read the right value.
I don't know whether the second-level cache is active and has an effect in this scenario.
One solution would be to do the update with JDBC instead of Hibernate, but I would like a cleaner way to do it.
A brief outline:
t0 = action starts; t1 = action finds and merges the entity; t2 = call to the web service starts; t6 = web service returns; tend = action ends
t3 = second process starts; t4 = second process finds and merges the entity; t5 = second process ends
t0 t1 t2 t3 t4 t5 t6 tend
|---------------|--------|------------------------------|-------|
|-----|----|
I need the value read at t4 (by the second process) to be the one merged at t1.
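For illustration, one clean alternative to raw JDBC would be to commit the update in its own short transaction before the slow web-service call; a minimal sketch, assuming a hypothetical helper bean that is not part of the original application:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class DeviceUpdater {

    @PersistenceContext(unitName = "ApplicationWeb_EJB")
    private EntityManager em;

    // Runs in its own transaction, which commits as soon as the method
    // returns, so the new state reaches the database before the caller
    // goes on to the 10 s web-service call.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Devices persistNow(Devices device) {
        return em.merge(device);
    }
}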
This is my persistence.xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="ApplicationWeb_EJB" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/ds/VisiaIntegrazioneDS</jta-data-source>
<class>.....entity.ApplicationServer</class>
....
<class>.....entity.Devices</class>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect"/>
<!-- Caching properties -->
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<!--<property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider" />-->
<property name="net.sf.ehcache.configurationResourceName" value="ehcache.xml"/>
<!--<property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/>-->
<!--<property name="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider" />-->
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.region.factory_class" value="net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup"/>
<property name="hibernate.max_fetch_depth" value="4"/>
<!-- hibernate.generate_statistics a true produce informazioni su hibernate da loggare -->
<property name="hibernate.generate_statistics" value="true"/>
</properties>
</persistence-unit>
</persistence>
This is an example of an EntityManager update:
EntityManager em = EntityMan.getEMF().createEntityManager();
try {
    // Note: the result of find() is discarded here; merge() below copies
    // the detached device's state into the persistence context by itself.
    em.find(Devices.class, device.getId());
    em.merge(device);
    em.flush();
} catch (Exception e) {
    logger.debug(e.getMessage());
    e.printStackTrace();
}

I am trying to figure out the whole process, so the statements I make below may be based on a wrong picture.
If I understood correctly, you would have the problem even if the web service took 10 ms. A request could arrive in the middle in any case.
But am I wrong in saying that you are creating the entity manager yourself rather than having it injected by the container? If you share the same manager within all your singleton methods, you have more control over concurrent access to the cache.
Second, if the call to the web service is not mandatory for the final value, is there a particular reason why you are not invoking it asynchronously, with a message-driven bean or by using the @Asynchronous annotation?
[UPDATE] You cannot have a real-time concurrent-access application that relies on a 10 s WS response.
For example, if the last WS call fails, what do you do? Roll back? Then what if the new incoming client has read uncommitted data that you later roll back?
As others have probably said, make the last WS call asynchronous, or better, delegate it to a message-driven bean (which has the advantage of being able to retry the WS call in case of failure). Either way, the call returns immediately to the client.
But I am quite confident that you would hit the same problem again. If I understood the architecture correctly, I would revisit the design to do these things:
Invoke the WS asynchronously or with an MDB in order to return the call to the client (see the sketch below)
Make the calls to the entity manager, at least the ones that touch the same table, thread safe. This should be easy, because you have a singleton class.
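For example, a minimal sketch of the @Asynchronous variant, assuming an EJB 3.1 container (the bean and method names are hypothetical):

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ThirdPartyNotifier {

    // The container returns control to the caller immediately and runs
    // the slow (up to 10 s) web-service call on a separate thread, so the
    // action's transaction can commit without waiting for the response.
    @Asynchronous
    public Future<String> notifyThirdParty(Devices device) {
        String response = callWebService(device); // the blocking call
        return new AsyncResult<String>(response);
    }

    private String callWebService(Devices device) {
        // placeholder for the actual JAX-WS client call
        return "OK";
    }
}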

Related

Exception using EntityManager and Glassfish v3 - IllegalStateException: Attempting to execute an operation on a closed EntityManager

I work on a project that uses Java EE, with GlassFish Server version 3. I'm frequently (though not always) hitting a problem in my singleton EJBs that use an EntityManager instance. A lot of times, I get this error:
[timestamp] [http-thread-pool-8080(49)] ERROR com.sun.xml.ws.server.sei.TieHandler.serializeResponse Attempting to execute an operation on a closed EntityManager.
java.lang.IllegalStateException: Attempting to execute an operation on a closed EntityManager.
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.verifyOpen(EntityManagerImpl.java:1662) ~[org.eclipse.persistence.jpa.jar:2.3.4.v20130626-0ab9c4c]
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.find(EntityManagerImpl.java:643) ~[org.eclipse.persistence.jpa.jar:2.3.4.v20130626-0ab9c4c]
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.find(EntityManagerImpl.java:532) ~[org.eclipse.persistence.jpa.jar:2.3.4.v20130626-0ab9c4c]
at com.sun.enterprise.container.common.impl.EntityManagerWrapper.find(EntityManagerWrapper.java:320) ~[container-common.jar:3.1.2.1]
The log goes on, but I've only shown the top of it. The next line of the log is the call to a web service deployed on the same server. When this error happens, it is always triggered by a call to a web service deployed on the same server that performs a lookup in the database using the EntityManager's find method.
The EntityManager is obtained outside the beans, in the @PostConstruct of a @WebService-annotated class, using the line '(EntityManager) new InitialContext().lookup("java:comp/env/persistence/etc");'. This is the class that receives all incoming requests and decides which bean should be called, based on the request.
Right after receiving a request, this class calls the respective singleton bean, passing the looked-up EntityManager along to it.
I understand that the EntityManager is closed when I try to perform the operation, and that is indeed the problem. However, I thought this opening and closing of the EntityManager was managed automatically. Apparently it doesn't work that way. I'm not closing the EntityManager directly anywhere in the code either.
I'm not seeing any reasonable way to approach this problem. All I find in online resources is that it may be a GlassFish bug and that restarting the server generally works. Nothing concrete to solve the problem.
Some of the information present in the persistence unit configured in the persistence.xml file I'm using is shown below.
<persistence-unit name="XXX" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/YYY</jta-data-source>
<properties>
<property name="eclipselink.target-database" value="Oracle"/>
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.cache.size.default" value="0"/>
<property name="eclipselink.cache.type.default" value="None"/>
<property name="eclipselink.weaving.internal" value="false"/>
<property name="toplink.target-database" value="Oracle"/>
<property name="eclipselink.session.customizer"
value="aaa.bbb.ccc.IsolateEmbeddablesCustomizer"/>
</properties>
<exclude-unlisted-classes>true</exclude-unlisted-classes>
</persistence-unit>
Do you have any idea how to solve this problem, and did any of you get the same error? Thank you.
I think the problem lies just here:
The entity manager is being injected outside of the bean in the @PostConstruct of a @WebService annotated class, using the line '(EntityManager) new InitialContext().lookup("java:comp/env/persistence/etc");' ...
The container provides EntityManagers from a pool it maintains. EMs have a lifecycle and can become invalid at some point in time. Normally (as explained a bit later) this is not a problem, since EMs are usually injected in a managed way.
But you initialize it once in @PostConstruct into some variable, and when the EM referenced by that variable becomes invalid, it is never re-initialized, because it is not managed by the container the way it is meant to be.
I think you can get around this problem by checking whether the EM is still valid and doing the lookup again if the reference is not valid. But I stress: that is not the correct way.
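A minimal sketch of that workaround, assuming the same JNDI name as in the question (again, a workaround only):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;

public class EntityManagerHolder {

    private EntityManager em;

    // Re-acquire the EM whenever the cached reference has been closed by
    // the container. Prefer the @PersistenceContext injection shown below.
    public EntityManager getEntityManager() throws NamingException {
        if (em == null || !em.isOpen()) {
            em = (EntityManager) new InitialContext()
                    .lookup("java:comp/env/persistence/etc");
        }
        return em;
    }
}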
passing the injected EntityManager to the respective bean.
Do not pass the EM around. Initialize the EM in the bean that uses it. You gain nothing by passing one and the same instance; on the contrary, it can make performance worse. Let the container handle the optimization of creating EMs.
So, normally you would do something like this in your beans (not passing the EM but managing it per bean):
@PersistenceContext(unitName = "XXX") // "unitName" might not be needed
private EntityManager em;             // if you use only one persistence unit
so that the EM is managed correctly. That way the container makes sure it is always valid.
If you have correctly constructed beans that you let the container initialize, for example by @Injecting them into your @WebService, this should be possible.
If for some reason that is not possible, you might still do a JNDI lookup in the bean, but then I am quite sceptical about whether any JTA transactions would work properly.
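For completeness, a minimal sketch of a bean shaped the recommended way (the class name is made up):

import javax.ejb.Singleton;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Singleton
public class SomeBusinessBean {

    // Injected and kept valid by the container for every business method.
    @PersistenceContext(unitName = "XXX")
    private EntityManager em;

    public <T> T findById(Class<T> entityClass, Object id) {
        return em.find(entityClass, id);
    }
}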

Inject hibernate PersistenceUnit

By using CDI as shown in the next code:
@PersistenceUnit
EntityManagerFactory emf;
I want to inject my Hibernate EntityManagerFactory.
Currently, if I execute the next line:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("HibernatePersistanceProv");
it works just as expected, but if I do it the first way it tries to use the Derby connection. I know this because I get the next error message:
org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
From the stack trace I know that it is caused by this:
Error connecting to server localhost on port 1527 with message Connection refused.
which I know means it is trying to connect to the (Java DB) Derby database.
My persistence.xml looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="HibernatePersistanceProv" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<property name="hibernate.connection.username" value="root"/>
<property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/aschema"/>
<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
<property name="hibernate.connection.username" value="root"/>
<property name="hibernate.connection.password" value=""/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.format_sql" value="false"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
<property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/aschema"/>
<property name="javax.persistence.jdbc.user" value="root"/>
<property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
</properties>
</persistence-unit>
</persistence>
I was reading that apparently I need to specify a standalone.xml to provide a different <jta-data-source>, but that seems to me a bit more complicated than it should be (I do not want to chase the wrong rabbit). I have been away from the Java EE world for a while, so I consider myself brand new to this (for-dummies explanations are widely appreciated).
(If it helps) I am running on a GlassFish 4.1 server. Please ask if any other information is required.
I think your problem is database related, but for injecting an EntityManager I usually go with this:
public class EntityManagerFactoryProducer {

    @Produces
    @ApplicationScoped
    public EntityManagerFactory create() {
        return Persistence.createEntityManagerFactory("HibernatePersistanceProv");
    }

    public void destroy(@Disposes EntityManagerFactory factory) {
        factory.close();
    }
}

public class EntityManagerProducer {

    @Inject
    private EntityManagerFactory emf;

    @Produces
    @RequestScoped
    public EntityManager create() {
        return emf.createEntityManager();
    }

    public void destroy(@Disposes EntityManager em) {
        em.close();
    }
}
Then just inject it wherever you want:
@Inject
private EntityManager entityManager;
If you have more than one database, use a qualifier combined with @Inject, as sketched below.
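For example, a minimal sketch of the qualifier approach for a second database (the qualifier @SecondDb and the unit name are made up):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Qualifier;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Hypothetical qualifier distinguishing the second database.
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
@interface SecondDb {}

public class SecondDbProducer {

    // Same producer pattern as above, but qualified; injection points then
    // use @Inject @SecondDb EntityManagerFactory (or a qualified
    // EntityManager producer built on top of it).
    @Produces
    @SecondDb
    @ApplicationScoped
    public EntityManagerFactory create() {
        return Persistence.createEntityManagerFactory("SecondPersistenceUnitName");
    }
}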
The solution ended up being that I was not managing my connection pools properly in GlassFish. To achieve this behavior (at least this is the way I found, but I am pretty sure there must be more), you need to:
Glassfish side:
In the "Common Tasks" Panel (left side of the administrator console (Glasfish4) expand JDBC.
Select JDBC Connection Pools and click the New button on top of the main (central) panel, proceed to configure the data base connection for the pool.
Now in the same JDBC section previously mentioned (left panel) select JDBC Resources (should be immediately above JDBC Connection Pools) there you can create a new resource so that you can use the CDI using the name of your choice OR as I did, just configure it in the jdbc/__default connection (as you may imagine that is the default connection provided by Glassfish CDI name space, to select your connection pool click on the link jdbc/__default on the table which appeared on the main (central) panel, that will take you into another form where you could use the drop down labeled as Pool Name: to select your newly configured connection pool, or the one of your choice; save it, top left of the main (central) panel.
Hibernate side:
In the persistence.xml, you can either:
a) make sure you are not providing any <jta-data-source> at all (iff you configured jdbc/__default), or
b) provide the JNDI name, usually the name you gave your JDBC resource (in case you created one); for the default connection (jdbc/__default) you can see the JNDI name in its edit view (it is java:comp/DefaultDataSource).
Write that into the <jta-data-source> element of your persistence.xml, i.e. <jta-data-source>java:comp/DefaultDataSource</jta-data-source>, and it should do the trick.
Sorry for the lack of graphical resources; I'll try to add them later. I hope it works for you, "anonymous reader".
IMPORTANT NOTE: I needed to switch back to GlassFish 4 (not 4.1) because GlassFish 4.1 currently (as of Jan '16) has a bug that does not allow you to create new connection pools.

hibernate doesn't issue update after flush

I'm using Hibernate 3.2.7 (same problem on 3.2.5) with Spring 3.0.1, all deployed on WebLogic 10.3 with an Oracle 10g database. I'm using JTA transaction management and the transaction is distributed (it is actually started and ended in another application; this code just sits in between).
The configuration used by Hibernate is declared in my persistence.xml and is the following:
<property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.WeblogicTransactionManagerLookup"/>
<property name="hibernate.query.factory_class" value="org.hibernate.hql.classic.ClassicQueryTranslatorFactory"/>
<property name="hibernate.current_session_context_class" value="jta"/>
<property name="hibernate.connection.release_mode" value="auto"/>
The spring configuration regarding the transaction manager is the following:
<!-- Instructs Spring to perform declarative transaction management on annotated classes -->
<tx:annotation-driven transaction-manager="txManager" proxy-target-class="true"/>
<!-- Data about the transaction manager and session factory -->
<bean id="txManager" class="org.springframework.transaction.jta.WebLogicJtaTransactionManager">
<property name="transactionManagerName" value="javax.transaction.TransactionManager"/>
<property name="defaultTimeout" value="${app.transaction.timeOut}"/>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<!-- The persistence unit is missing the jta-data-source so that the application server does not
create the EntityManagerFactory; Spring will create its own LocalContainerEntityManagerFactoryBean, overriding the data source -->
<property name="dataSource" ref="myDataSource"/>
<!-- specific properties like the JPA provider and provider properties are in the persistence unit -->
<property name="persistenceUnitName" value="my.persistence.unit"/>
</bean>
<!-- define data source in application server -->
<jee:jndi-lookup id="myDataSource" jndi-name="${db.jndiName}"/>
I'm using a generic CrudDao with an update method that looks like this:
public void update(Object entity) {
    // entityManager injected by @PersistenceContext
    entityManager.merge(entity);
    entityManager.flush();
}

public Object getById(Object id, Class entityClass) throws PersistenceException {
    return entityManager.find(entityClass, id);
}
UPDATED: added the getById method.
The code that does not work as expected looks like this:
MyObject myObj = getMyObjectThroughSomeOneToManyRelation(idOne, idOther);
// till now was null
myObj.setSomeDateAttr(someDate);
genericDao.update(myObj);
MyObject myObjFromDB = genericDao.getById(myObj.getId(), MyObject.class);
The result is that if I print myObj.getSomeDateAttr() it returns the value of someDate, but if I print myObjFromDB.getSomeDateAttr() it is still null.
I've tried changing the update method to:
org.hibernate.Session s = (org.hibernate.Session) entityManager.getDelegate();
s.evict(entity);
s.update(entity);
s.flush();
And it still doesn't work.
When I turn on Hibernate's show_sql flag, I don't see any UPDATE being issued on flush, nor when I query the entity manager for the object with the same id. The SELECTs are all visible.
UPDATE:
At the end of the transaction the update is actually executed and everything is written to the db. So my problem is "just" during the transaction.
I'm afraid the problem may be linked to the configuration of the transaction manager in Spring and Hibernate.
I hope someone can help me, as I have already lost a day and a half on this with no luck.
You need to look closely at Hibernate's merge behaviour. As per the documentation:
if there is a persistent instance with the same identifier currently
associated with the session, copy the state of the given object onto
the persistent instance
if there is no persistent instance currently associated with the session, try to load it from the database, or create a new persistent instance
the persistent instance is returned
the given instance does not become associated with the session, it
remains detached
Given your statement about the SQL queries in the log, it looks like MyObject myObj = getMyObjectThroughSomeOneToManyRelation(idOne, idOther); returns the persistent object, but when you modify it (it becomes dirty) and call merge, the new state is copied onto the current persistent object in the session. As the third point says, merge returns the persistent instance: a new manageable persistent object, which is the one you need to use in subsequent operations.
When you call find, Hibernate returns the persistent object in the session, not that manageable persistent object, which is why you don't see the changes in the object returned by find.
To fix your problem, change the return type of the update method:
public Object update(Object entity) {
    // entityManager injected by @PersistenceContext
    return entityManager.merge(entity);
}
and in the service use it as below:
MyObject myObj = getMyObjectThroughSomeOneToManyRelation(idOne, idOther);
// till now was null
myObj.setSomeDateAttr(someDate);
// you could keep using myObj as well, instead of myNewObj
MyObject myNewObj = genericDao.update(myObj);
// no need to call get anymore:
// MyObject myObjFromDB = genericDao.getById(myObj.getId(), MyObject.class);
System.out.println("Updated value: " + myNewObj.getSomeDateAttr());
Have a look at this article as well.

Can't do DELETE in Java, JPA

I have the following code. The problem is that on the second line I get "org.apache.openjpa.persistence.TransactionRequiredException: Can only perform operation while a transaction is active".
The first line executes fine. What is my mistake?
//em is some EntityManager
String s = (String)em.createQuery("SELECT something FROM something WHERE something = something").getSingleResult();
em.createQuery("DELETE FROM something WHERE something = something").executeUpdate();
Read operations are handled differently from write operations in JPA. Write operations (be they creates, updates or deletes) typically need to happen in the context of a transaction. The transaction boundary demarcates the calls you make to the session or entity manager, and defines when the transaction will be committed (for example, with container-managed transactions, commit may be called on method exit).
For JPA, all calls to persist, remove, refresh and merge need to be done in a transaction. Query calls need to be performed in a transaction if they invoke executeUpdate. Calling getResultList() or getSingleResult() needs to be done in the context of a transaction only if the lock mode is not LockModeType.NONE.
Depending on your application needs you will use either container managed transactions (CMT), or bean managed transactions (BMT).
For CMT, make sure your persistence unit defines your datasource as JTA, and then annotate your class or method appropriately. For example:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
version="2.0">
<persistence-unit name="sample" transaction-type="JTA">
<jta-data-source>java:/DefaultDS</jta-data-source>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
<property name="hibernate.hbm2ddl.auto" value="create-drop"/>
</properties>
</persistence-unit>
</persistence>
And then annotate your class/method with the appropriate transaction type:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void doSomething() {
em.createQuery("DELETE FROM something WHERE something = something").executeUpdate();
}
If using BMT, you have to manage the transaction explicitly (the example below uses the resource-local EntityTransaction API; with a JTA unit you would inject a UserTransaction instead):
public void doSomething() {
    em.getTransaction().begin();
    try {
        em.createQuery("DELETE FROM something WHERE something = something").executeUpdate();
        em.getTransaction().commit();
    } catch (Exception e) {
        // committing after a failure would throw; roll back instead
        em.getTransaction().rollback();
    }
}
You can only modify data in the database while a transaction is active. You start a transaction with
em.getTransaction().begin();
and end it successfully with
em.getTransaction().commit();
or end it rolling back the changes with
em.getTransaction().rollback();
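Assembled into one piece for a RESOURCE_LOCAL EntityManager (assuming an EntityManagerFactory emf; with a JTA unit you would demarcate with UserTransaction or container-managed boundaries instead):

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
try {
    em.createQuery("DELETE FROM something WHERE something = something")
      .executeUpdate();
    em.getTransaction().commit();
} catch (RuntimeException e) {
    em.getTransaction().rollback(); // undo the changes on failure
    throw e;
} finally {
    em.close();
}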

Hibernate creates too many connections using @Transactional, how to prevent this?

I'm fairly new to Hibernate and PostgreSQL, but so far it's going well, although I'm now running into a problem I can't solve. I'm getting an error while filling the database, on the very first operation (which is one transaction inserting or updating 1000 rows). The error is:
SQL Error: 0, SQLState: 53300
FATAL: sorry, too many clients already
Exception in thread "main" org.hibernate.exception.GenericJDBCException: Cannot open connection
This is the important code:
@Repository
public class PDBFinderDAO extends GenericDAO<PDBEntry> implements IPDBFinderDAO {

    @Override
    @Transactional
    public void updatePDBEntry(Set<PDBEntry> pdbEntrySet) {
        for (PDBEntry pdbEntry : pdbEntrySet) {
            getCurrentSession().saveOrUpdate(pdbEntry);
        }
    }
}
getCurrentSession() is inherited from GenericDAO and calls sessionFactory.getCurrentSession().
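GenericDAO itself isn't shown in the post; a plausible shape for that helper (an assumption) would be:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;

public abstract class GenericDAO<T> {

    @Autowired
    private SessionFactory sessionFactory;

    // With HibernateTransactionManager, this returns the Session bound to
    // the transaction that @Transactional opened for the current thread.
    protected Session getCurrentSession() {
        return sessionFactory.getCurrentSession();
    }
}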
This is my Hibernate configuration:
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<!-- Database connection settings -->
<property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
<property name="hibernate.connection.url">jdbc:postgresql://localhost/PDBeter</property>
<property name="hibernate.connection.username">xxxx</property>
<property name="hibernate.connection.password">xxxx</property>
<!-- Create or update the database schema on startup -->
<property name="hbm2ddl.auto">create</property>
<!-- Use the C3P0 connection pool provider -->
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.idle_test_period">300</property>
<!-- Disable the second-level cache -->
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
<!-- Batch size -->
<property name="hibernate.jdbc.batch_size">50</property>
<!-- this makes sure the more efficient new id generators are being used,
though these are not backwards compatible with some older databases -->
<property name="hibernate.id.new_generator_mappings">true</property>
<!-- Echo all executed SQL to stdout -->
<!--
<property name="hibernate.show_sql">true</property>
-->
<property name="format_sql">true</property>
<property name="use_sql_comments">true</property>
</session-factory>
</hibernate-configuration>
This is my Spring configuration:
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.springframework.org/schema/beans"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd">
<context:component-scan base-package="nl.ru.cmbi.pdbeter" />
<!-- Transaction Manager -->
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<tx:annotation-driven />
<!-- Session Factory -->
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="configLocation" value="hibernate.cfg.xml" />
<property name="packagesToScan" value="nl.ru.cmbi.pdbeter.core.model.domain" />
</bean>
<!-- Task Executor -->
<task:annotation-driven />
</beans>
I'm not really sure what is going wrong; this should open only one connection each time and close it afterwards. Isn't that what @Transactional is supposed to do? Also, is there an easy way to check how many connections are open at a certain time, so that I can check how many connections were open before and after the error?
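For PostgreSQL specifically, one quick way to watch the connection count before and after the failing operation (a sketch; the URL and credentials are taken from the Hibernate config above) is to count the rows of pg_stat_activity:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ConnectionCount {
    public static void main(String[] args) throws SQLException {
        // pg_stat_activity has one row per open server connection, so this
        // shows how many connections Hibernate/C3P0 actually hold.
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost/PDBeter", "xxxx", "xxxx");
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT count(*) FROM pg_stat_activity")) {
            rs.next();
            System.out.println("open connections: " + rs.getLong(1));
        }
    }
}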
EDIT: when I check the database, nothing has been added, so it can't even manage one connection. What is going wrong here?
EDIT: I'm sorry, but I already solved it myself. It was a very stupid mistake: there was a very small criteria query that was also executed 1000 times, but before the transaction, causing it to be executed in 1000 separate transactions/sessions/connections (I think; correct me if I'm wrong!).
EDIT: OK, it turns out that didn't solve it at all, because I need that small query to check whether something is already in the database, and if so, to get that object from the database so I can update its fields/columns/whatever you want to call it.
This is the method in the GenericDAO:
@Override
public PDBEntry findByAccessionCode(String accessionCode) {
    return (PDBEntry) createCriteria(Restrictions.eq("accessionCode", accessionCode)).uniqueResult();
}
There is a function that builds the mapped objects which isn't in the DAO, since it converts a raw data file into the database objects; I wanted to keep that out of the database operations and only put saveOrUpdate() in the database module. The problem I have now is that findByAccessionCode() is called 1000 times during the conversion of the raw data file to database objects, because I need to check whether a certain piece of data is already present in the database, and if so, get the object from the database instead of creating a new one.
Now how would I execute that query 1000 times inside one connection in this context? I tried making the conversion method that converts the 1000 files @Transactional, but that didn't work.
Here's the conversion method:
private void updatePDBSet(Set<RawPDBEntry> RawPDBEntrySet) {
    Set<PDBEntry> pdbEntrySet = new LinkedHashSet<PDBEntry>();
    for (RawPDBEntry pdb : RawPDBEntrySet) {
        PDBEntry pdbEntry = pdbEntryDAO.findByAccessionCode(pdb.id);
        if (pdbEntry == null) {
            pdbEntry = new PDBEntry(pdb.id, pdb.header.date);
        }
        pdbEntry.setHeader(pdb.header.header);
        ExpMethod expMethod = new ExpMethod.Builder(pdbEntry, pdb.expMethod.expMethod.toString()).build();
        if (pdb.expMethod.resolution != null) {
            expMethod.setResolution(pdb.expMethod.resolution);
        }
        if (pdb.expMethod.rFactor != null) {
            expMethod.setRFactor(pdb.expMethod.rFactor.rFactor);
            if (pdb.expMethod.rFactor.freeR != null) {
                expMethod.setFreeR(pdb.expMethod.rFactor.freeR);
            }
        }
        if (pdb.hetGroups != null) {
            for (PFHetId hetId : pdb.hetGroups.hetIdList) {
                HetGroup hetGroup = new HetGroup(pdbEntry, hetId.hetId);
                if (hetId.nAtom != null) {
                    hetGroup.setNAtom(hetId.nAtom);
                }
                if (hetId.name != null) {
                    hetGroup.setName(hetId.name);
                }
            }
        }
        for (PFChain chain : pdb.chainList) {
            new Chain(pdbEntry, chain.chain);
        }
        pdbEntrySet.add(pdbEntry);
    }
    pdbFinderDAO.updatePDBEntry(pdbEntrySet);
}
(pdbFinderDAO.updatePDBEntry(pdbEntrySet) is where I originally thought the problem originated.)
EDIT: First of all, sorry that I created this as a new post; I really thought I had found the answer, but I'll just continue in this post with further edits.
OK, now I have put all 1000 findByAccessionCode criteria queries inside the DAO, by sending a Set of the raw data files to the DAO so it can retrieve the ids there, find them in the database, get the ones it can find, and add them to a HashMap where each database object is mapped to the reference of its raw data file as key (so I know which raw data belongs to which database entry). I made this function @Transactional like so:
@Override
@Transactional
public Map<RawPDBEntry, PDBEntry> getRawPDBEntryToPDBEntryMap(Set<RawPDBEntry> rawPDBEntrySet) {
    Map<RawPDBEntry, PDBEntry> RawPDBEntryToPDBEntryMap = new HashMap<RawPDBEntry, PDBEntry>();
    for (RawPDBEntry pdb : rawPDBEntrySet) {
        RawPDBEntryToPDBEntryMap.put(pdb, (PDBEntry) createCriteria(Restrictions.eq("accessionCode", pdb.id)).uniqueResult());
    }
    return RawPDBEntryToPDBEntryMap;
}
Still no success... I get the exact same error, but it does tell me it's the criteria query that causes it. Why can't I execute all these 1000 queries within the same connection?
EDIT: Yet another update: I tried adding the queries one by one, and this worked, slowly, but it worked. I did this on an empty database. Next I tried the same thing with the database already containing the data from the first try, and I got the following error:
Exception in thread "main" org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I'm guessing this has something to do with the fact that I'm getting the objects (that are already in the database and so have to be updated) within the DAO, then sending references back to the conversion method, changing their fields, and then sending them back to the DAO to make them persistent. However, after googling a bit I found people who had problems with collections in their POJOs with the annotation:
@OneToMany(mappedBy = "pdbEntry", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
where the cascading caused the problem. Do I have to remove all of these cascades and hand-code the saveOrUpdate() operations for all the different mapped objects? Or does this have nothing to do with the error?
And finally: I'm still no closer to figuring out how to do this for 1000 objects at a time.
Solved the problem: it had to do with a bad Spring setup that caused @Transactional not to be recognized. I fixed that, and the error went away.
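For reference, once <tx:annotation-driven> is actually applied (it must live in the same application context that creates the annotated beans, and the beans must be reachable by <context:component-scan>), a single service-level @Transactional method keeps the whole batch, lookups included, on one bound session/connection. A sketch with a hypothetical service class:

import java.util.Set;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PDBImportService {

    @Autowired
    private IPDBFinderDAO pdbFinderDAO;

    // One transaction for the whole batch: every getCurrentSession() call
    // made by the DAO inside this method sees the same bound Session.
    @Transactional
    public void importBatch(Set<PDBEntry> pdbEntrySet) {
        pdbFinderDAO.updatePDBEntry(pdbEntrySet);
    }
}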
