Hibernate: get persisted state of dirty entity

Taking following code:
MyEntity e = dao.getEntity(1);
e.setProp1(someVal);
e.setProp2(otherVal);
MyEntity eOld = dao.getEntity(1);
If I do it like this, then e will get updated (because Hibernate detects it is dirty) and eOld will have the same property values (prop1, prop2) as e. Is there a way to get the persisted state of this dirty entity (as it is in the database)?

Try:
<property name="defaultAutoCommit" value="false" />
Or alternatively, use detach and re-attach when ready to persist:
dao.detach(e);
...
e.setProp1("AnotherVal"); // not propagated to the database
dao.merge(e); // update

Actually, I may already have found the solution myself...
I had already tried evicting eOld, but that doesn't make sense; I need to evict e before retrieving eOld, and after the comparison (for auditing) reattach (merge) e to the session again.
It seems to work in any case.
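A minimal sketch of that sequence, assuming the DAO passes evict and merge through to the underlying Session (auditDifferences is a hypothetical comparison step):
MyEntity e = dao.getEntity(1);
e.setProp1(someVal);
e.setProp2(otherVal);

dao.evict(e);                      // detach e so the next load hits the database
MyEntity eOld = dao.getEntity(1);  // the persisted state, as it is in the database

auditDifferences(eOld, e);         // hypothetical compare-for-auditing step

dao.merge(e);                      // reattach e; its changes are flushed as usual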

Related

Hibernate Update Exception: a different object with the same identifier value was already associated with the session [duplicate]

I have two user objects, and when I try to save one using
session.save(userObj);
I am getting the following error:
Caused by: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session:
[com.pojo.rtrequests.User#com.pojo.rtrequests.User#d079b40b]
I am creating the session using
BaseHibernateDAO dao = new BaseHibernateDAO();
rtsession = dao.getSession(userData.getRegion(),
BaseHibernateDAO.RTREQUESTS_DATABASE_NAME);
rttrans = rtsession.beginTransaction();
rttrans.begin();
rtsession.save(userObj1);
rtsession.save(userObj2);
rtsession.flush();
rttrans.commit();
rtsession.close(); // in finally block
I also tried doing the session.clear() before saving, still no luck.
This is the first time I am getting the session object when a user request comes in, so I don't understand why it is saying that the object is already present in the session.
Any suggestions?
I have had this error many times and it can be quite hard to track down...
Basically, what hibernate is saying is that you have two objects which have the same identifier (same primary key) but they are not the same object.
I would suggest you break down your code, i.e. comment out bits until the error goes away, then put the code back until it returns; that should isolate the error.
It most often happens with cascading saves, where there is a cascaded save between objects A and B, but object B has already been associated with the session as a different instance from the B referenced by A.
What primary key generator are you using?
The reason I ask is that this error is related to how you're telling Hibernate to ascertain the persistent state of an object (i.e. whether an object is persistent or not). The error could be happening because Hibernate is trying to persist an object that is already persistent. In fact, if you use save, Hibernate will try to persist that object, and maybe there is already an object with that same primary key associated with the session.
Example
Suppose you have a Hibernate entity class for a table with 10 rows, keyed on a composite primary key (column 1 and column 2). Now, you have removed 5 rows from the table at some point. If you then try to add the same 10 rows again, the 5 rows that were already removed will be inserted without errors, while the remaining 5 rows, which still exist, will throw this exception.
So the easy approach is to check whether you have updated or removed any rows in that table and are later trying to insert the same objects again, as in the sketch below.
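A rough sketch of that collision and the usual merge() escape hatch (StoryRow, detachedRows, and the session setup are hypothetical names for illustration):
// detachedRows holds the original 10 objects; 5 of their keys still exist
for (StoryRow row : detachedRows) {
    session.save(row); // the 5 surviving keys can collide with instances already
                       // associated with the session -> NonUniqueObjectException
}

// merge() instead reconciles each detached instance with the session's copy
for (StoryRow row : detachedRows) {
    session.merge(row);
}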
This is only one of the places where Hibernate creates more problems than it solves.
In my case there are many objects with the same identifier 0, because they are new and don't have one yet; the database generates them. Somewhere I have read that 0 signals "id not set". The intuitive way to persist them is to iterate over them and tell Hibernate to save the objects. But you can't do that - "Of course you should know that Hibernate works this and that way, therefore you have to..."
So now I can try changing the ids to Long instead of long and see whether it works then.
In the end it's easier to do it with a simple mapper of your own, because Hibernate is just an additional opaque burden.
Another example: trying to read parameters from one database and persist them in another forces you to do nearly all the work manually. But if you have to do that anyway, using Hibernate is just additional work.
Use session.evict(object). The evict() method removes an instance from the session cache. So for the first-time save, call session.save(object) before evicting the object from the cache. In the same way, update the object by calling session.saveOrUpdate(object) or session.update(object) before calling evict().
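In code, that ordering would look roughly like this (Employee and its setter are hypothetical, for illustration):
Employee emp = new Employee();
session.save(emp);          // first-time save happens before the evict
session.evict(emp);         // then remove the instance from the session cache

emp.setName("changed");     // changes made while the instance is detached
session.saveOrUpdate(emp);  // reattach and update...
session.evict(emp);         // ...before evicting again, as described above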
This can happen when you have used the same session object for both reading and writing. How?
Say you have created one session.
You read a record from the employee table with primary key Emp_id=101.
Now you have modified the record in Java.
And you are going to save the employee record to the database.
We have not closed the session anywhere here.
The object that was read still persists in the session, and it conflicts with the object that we wish to write. Hence this error comes.
As somebody already pointed out above, I ran into this problem when I had cascade=all on both ends of a one-to-many relationship. So let's assume A --> B (one-to-many from A, many-to-one from B). Updating an instance of B in A and then calling saveOrUpdate(A) resulted in a circular save request: the save of A triggers the save of B, which triggers the save of A... and on the third pass, as the entity (of A) was being added to the session's persistence context, the duplicate-object exception was thrown. I could solve it by removing the cascade from one end.
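In annotation form, the fix amounts to keeping the cascade on one side only; a minimal sketch (A and B stand for the hypothetical entities above):
import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
class A {
    @Id @GeneratedValue
    private Long id;

    // the parent side keeps the cascade, so saving A still saves its Bs
    @OneToMany(mappedBy = "a", cascade = CascadeType.ALL)
    private Set<B> bs = new HashSet<B>();
}

@Entity
class B {
    @Id @GeneratedValue
    private Long id;

    // no cascade here: this breaks the A -> B -> A save loop
    @ManyToOne
    @JoinColumn(name = "a_id")
    private A a;
}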
You can use session.merge(obj) if you are saving a persistent object with the same identifier from different sessions.
It worked; I had the same issue before.
I ran into this problem by:
Deleting an object (using HQL)
Immediately storing a new object with the same id
I resolved it by flushing the session after the delete, and clearing the cache before saving the new object:
String delQuery = "DELETE FROM OasisNode";
session.createQuery( delQuery ).executeUpdate();
session.flush();
session.clear();
This problem occurs when we update the same object in the session that we used to fetch the object from the database.
You can use Hibernate's merge method instead of the update method.
E.g. first use session.get(), then session.merge(object). This method will not create any problem. We can also use merge() to update the object in the database.
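For example (a sketch; Book and detachedBook are hypothetical):
Book attached = (Book) session.get(Book.class, 1L);
// ... later, a detached copy with the same identifier arrives from elsewhere ...
Book merged = (Book) session.merge(detachedBook); // no NonUniqueObjectException
// keep working with 'merged': it is the instance the session actually tracks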
I also ran into this problem and had a hard time finding the error.
The problem I had was the following:
The object had been read by a DAO with a different Hibernate session.
To avoid this exception, simply re-read the object with the DAO that is going to save/update it later on.
so:
class A {
    void readFoo() {
        someDaoA.read(myBadAssObject); // different session than in class B
    }
}

class B {
    void saveFoo() {
        someDaoB.read(myBadAssObjectAgain); // different session than in class A
        [...]
        myBadAssObjectAgain.fooValue = "bar";
        persist();
    }
}
Hope that save some people a lot of time!
Get the object inside the session; here is an example:
MyObject ob = (MyObject) session.get(MyObject.class, id);
By default it was using the identity strategy, but I fixed it by adding
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
Are your id mappings correct? If the database is responsible for creating the id through an identity column, you need to map your user object to that.
Check if you forgot to put @GeneratedValue on the @Id column.
I had the same problem with a many-to-many relationship between Movie and Genre. The program threw a
Hibernate Error: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session
error.
I found out later that I just had to make sure @GeneratedValue was on the GenreId get method.
I encountered this problem when deleting an object; neither evict nor clear helped.
/**
 * Deletes the given entity, even if Hibernate has an old reference to it.
 * If the entity has already disappeared due to a db cascade, then no-op.
 */
public void delete(final Object entity) {
    Object merged = null;
    try {
        merged = getSession().merge(entity);
    } catch (ObjectNotFoundException e) {
        // disappeared already due to cascade
        return;
    }
    getSession().delete(merged);
}
Before the position where the repetitive objects begin, you should close the session
and then start a new session:
session.close();
session = HibernateUtil.getSessionFactory().openSession();
This way, no single session contains more than one entity with the same identifier.
I had a similar problem. In my case I had forgotten to set the INCREMENT_BY value in the database to the same value used by CACHE_SIZE and allocationSize. (The arrows point to the attributes mentioned.)
SQL:
CREATED 26.07.16
LAST_DDL_TIME 26.07.16
SEQUENCE_OWNER MY
SEQUENCE_NAME MY_ID_SEQ
MIN_VALUE 1
MAX_VALUE 9999999999999999999999999999
INCREMENT_BY 20 <-
CYCLE_FLAG N
ORDER_FLAG N
CACHE_SIZE 20 <-
LAST_NUMBER 180
Java:
@SequenceGenerator(name = "mySG", schema = "my",
sequenceName = "my_id_seq", allocationSize = 20 /* <- */)
Late to the party, but this may help future readers -
I got this issue when I selected a record using getSession() and then updated another record with the same identifier using the same session. Code added below.
Customer existingCustomer = getSession().get(Customer.class, 1);
Customer customerFromUi; // customer details coming from the UI with identifier 1
getSession().update(customerFromUi); // here the issue comes
This should never be done. The solution is either to evict the session before the update or to change the business logic.
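The evict variant would look roughly like this (a sketch reusing the hypothetical Customer scenario above):
Customer existingCustomer = getSession().get(Customer.class, 1);
// ... read whatever you need from existingCustomer ...
getSession().evict(existingCustomer); // detach the instance that was read earlier
getSession().update(customerFromUi);  // the UI copy can now be reattached safely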
Just check whether the id is null or 0, like
if (offersubformtwo.getId() != null && offersubformtwo.getId() != 0)
in add or update, where the content is set from the form into the POJO.
I'm new to NHibernate, and my problem was that I used a different session to query my object than the one I used to save it, so the saving session didn't know about the object.
It seems obvious, but from reading the previous answers I was looking everywhere for two objects, not two sessions.
@GeneratedValue(strategy=GenerationType.IDENTITY) - adding this annotation to the primary key property in your entity bean should solve this issue.
I resolved this problem.
Actually, this happens because we forgot to specify the generator type of the PK property in the bean class. So give it some type, for example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private int id;
Without it, when we persist bean objects, every object acquires the same id; the first object is saved, and when another object is persisted the Hibernate framework throws this type of exception: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session.
The problem happens because in the same Hibernate session you are trying to save two objects with the same identifier. There are two solutions:
This happens because you have not configured your mapping.xml file correctly for the id field, as below:
<id name="id">
<column name="id" sql-type="bigint" not-null="true"/>
<generator class="hibernateGeneratorClass"/>
</id>
Overload the getSession method to accept a parameter like isSessionClear,
and clear the session before returning the current session, like below:
public static Session getSession(boolean isSessionClear) {
    if (session.isOpen() && isSessionClear) {
        session.clear();
        return session;
    } else if (session.isOpen()) {
        return session;
    } else {
        return sessionFactory.openSession();
    }
}
This will cause existing session objects to be cleared, and even if Hibernate doesn't generate a unique identifier (assuming you have configured your database properly for the primary key, using something like AUTO_INCREMENT), it should work for you.
Apart from what wbdarby said, it can even happen when an object is fetched by passing its identifier to an HQL query. If you then try to modify the object's fields and save it back to the DB (the modification could be an insert, delete, or update) over the same session, this error will appear. Try clearing the Hibernate session before saving your modified object, or create a brand new session.
Hope I helped ;-)
I had the same error; I was replacing my Set with a new one obtained from Jackson.
To solve this, I keep the existing set: I remove from the old set the elements unknown to the new one with retainAll, then add the new ones with addAll.
this.oldSet.retainAll(newSet);
this.oldSet.addAll(newSet);
No need to get hold of the Session and manipulate it.
Try this; the below worked for me!
In the hbm.xml file:
We need to set the dynamic-update attribute of the class tag to true:
<class dynamic-update="true">
Set the class attribute of the generator tag under the unique column to identity:
<generator class="identity">
Note: set the unique column to identity rather than assigned.
I just had the same problem. I solved it by adding this line:
@GeneratedValue(strategy = GenerationType.IDENTITY)
Another thing that worked for me was to make the instance variable Long in place of long.
I had my primary key variable as long id;
changing it to Long id; worked.
All the best!
You can always do a session flush.
Flush will synchronize the state of all the objects in the session (please, someone correct me if I'm wrong), and in some cases it may solve your problem.
Implementing your own equals and hashCode may help you too, as sketched below.
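If you go the equals/hashCode route, the usual advice is to base them on a business key rather than the generated id; a sketch (a hypothetical User entity with a unique email column):
import javax.persistence.*;

@Entity
public class User {
    @Id @GeneratedValue
    private Long id;

    @Column(unique = true)
    private String email;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        // compare on the business key, which is stable across sessions
        return email != null && email.equals(((User) o).email);
    }

    @Override
    public int hashCode() {
        // stays consistent whether the instance is transient or persisted
        return email == null ? 0 : email.hashCode();
    }
}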
Check your cascade settings. The cascade settings on your models could be causing this. I removed my cascade settings (essentially not allowing cascading inserts/updates), and this solved my problem.
I found this error as well. What worked for me was to make sure that the auto-generated primary key is not a primitive data type (long, int, etc.) but an object (Long, Integer, etc.).
When you create your object to save it, make sure you pass null and not 0, as in the sketch below.
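A sketch of both points together (Invoice is a hypothetical entity):
import javax.persistence.*;

@Entity
public class Invoice {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id; // Long, not long: a brand-new instance carries id == null
}

// elsewhere:
Invoice inv = new Invoice(); // id stays null, so Hibernate treats it as transient
session.save(inv);           // with a primitive long the id would default to 0,
                             // and several unsaved instances could collide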

OpenJpa query caching is not refreshing in case of null value

I am facing a problem with OpenJPA second-level caching. Most of the time caching works, but in one particular case it does not. Here is the scenario in which it fails:
when the query returns no result, the empty result is stored in the cache and never cleared; the cache is only refreshed when the query returns a value.
Here is the code I wrote to get the value from the database:
List<PartnerapiworkflowEntity> partnerapiworkflowEntityList = null;
try {
    partnerapiworkflowEntityList = entityManager.createQuery("select p from someentity p where p.id = :Id and p.name = :name and " +
            "p.code = :Code and p.operationname = :operationName")
        .setParameter("Id", Id)
        .setParameter("name", name)
        .setParameter("code", Code)
        .setParameter("operationName", operationName)
        .getResultList(); //.getSingleResult();
    if (partnerapiworkflowEntityList != null && partnerapiworkflowEntityList.size() > 0) {
        return Boolean.TRUE;
    }
} catch (NoResultException ne) {
    logger.severe("some logging info.");
} finally {
    // entityManager.detach(partnerapiworkflowEntity);
}
And here is the code which refreshes the cache:
try {
    entityManager.flush();
    entityManager.clear();
    entityManager.getEntityManagerFactory().getCache().evictAll();
    //((JpaEntityManager)entityManager.getDelegate()).getServerSession().getIdentityMapAccessor().invalidateAll();
    entityManager.flush();
} catch (Exception e) {
    throw e;
}
And this is the persistence.xml configuration:
<property name="openjpa.jdbc.DBDictionary" value="mysql"/>
<property name="openjpa.DataCache" value="true(EnableStatistics=true, CacheSize=10000, SoftReferenceSize=0, EvictionSchedule='+10')"/>
<property name="openjpa.QueryCache" value="true(EvictPolicy='timestamp')"/>
<!--<property name="openjpa.jdbc.QuerySQLCache" value="true(EnableStatistics=true)"/>-->
<property name="javax.persistence.sharedCache.mode" value="ENABLE_SELECTIVE"/>
<property name="openjpa.Instrumentation" value="jmx(Instrument='DataCache,QueryCache,QuerySQLCache')"/>
<property name="openjpa.MetaDataRepository" value="Preload=true"/>
<property name="openjpa.Log" value="SQL=Trace" />
<property name="openjpa.ConnectionFactoryProperties" value="PrintParameters=true" />
Everything works fine when the query always returns a value. The problem starts when it returns no result: the first time, the empty result is stored in the cache, and after that it is never refreshed.
I am using OpenJPA 2 and Hibernate.
This issue was first observed with OpenJPA 2.2.2. Looking it up online revealed a defect related to the L2 cache that was fixed on trunk (https://issues.apache.org/jira/browse/OPENJPA-2285),
but the problem was found again later in https://issues.apache.org/jira/browse/OPENJPA-2522.
Solution:
So far it is not fixed, but some workarounds have been given.
Disable query cache
To disable the query cache (default), set the openjpa.QueryCache property to false:
<property name="openjpa.QueryCache" value="false"/>
Or configure the SQL query cache:
To specify a custom cache class:
<property name="openjpa.jdbc.QuerySQLCache" value="com.mycompany.MyCustomCache"/>
To use an unmanaged cache:
<property name="openjpa.jdbc.QuerySQLCache" value="false"/>
OR
To use an unmanaged cache:
<property name="openjpa.jdbc.QuerySQLCache" value="all"/>
Open JPA - L2 Cache Issue and Workaround
This tutorial describes exactly your problem; from it you can get a clear picture of why this error occurs.
It suggests, as a workaround, keeping the related data consistent so that a NullPointerException does not arise, until OpenJPA solves the issue. :D
Several mechanisms are available to the application to bypass SQL
caching for a JPQL query.
A user application can disable the prepared SQL cache for the entire lifetime of a persistence context by invoking the following method on OpenJPA's EntityManager SPI interface:
OpenJPAEntityManagerSPI.setQuerySQLCache(boolean)
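For example (a sketch; it assumes em is an EntityManager backed by OpenJPA, so the cast succeeds):
import org.apache.openjpa.persistence.OpenJPAEntityManagerSPI;
import org.apache.openjpa.persistence.OpenJPAPersistence;

// unwrap the plain EntityManager to OpenJPA's SPI interface
OpenJPAEntityManagerSPI spi = (OpenJPAEntityManagerSPI) OpenJPAPersistence.cast(em);
spi.setQuerySQLCache(false); // bypass prepared SQL caching for this persistence context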
The plug-in property openjpa.jdbc.QuerySQLCache can be configured to exclude certain JPQL queries, as shown below:
<property name="openjpa.jdbc.QuerySQLCache" value="true(excludes='select c from Company c;select d from Department d')"/>
This will never cache the JPQL queries select c from Company c and select d from Department d.
Root Cause Analysis:
The query cache stores the object IDs that are returned by query executions. When you run a query, JPA assembles a key that is based on the query properties and the parameters that are used at launch time and checks for a cached query result. If one is found, the object IDs in the cached result are looked up, and the resulting persistence-capable objects are returned. Otherwise, the query is launched against the database and the object IDs that are loaded by the query are placed into the cache. The object ID list is not cached until the list that is returned at query launch time is fully traversed.
IBM Recommendation:
L2 caching increases the memory consumption of the application,
therefore, it is important to limit the size of the L2 cache. There is
also a possibility of stale data for updated objects in a clustered
environment. Configure L2 caching for read-mostly, infrequently
modified entities. L2 caches are not recommended for frequently and
concurrently updated entities.
Resource Link:
Open JPA 2.4.0 Caching Reference Guide
You are evicting entries, but what about the query cache? It may be that in the normal case the eviction is noticed by the query cache, so the result is invalidated... That could explain why the null case fails here. Can you please confirm?
EDIT:
<property name="openjpa.QueryCache" value="false"/>
means no query cache. My bad.
Another thing to try: a NULL check? You have the query, with params; can you execute it directly against the database?

How to disable cache in EclipseLink

I have tried disabling the L2 cache in EclipseLink (Eclipse Indigo) by using the following properties in persistence.xml:
<property name="eclipselink.cache.shared.default" value="false"/>
<shared-cache-mode>NONE</shared-cache-mode>
Basically I am testing one scenario: whether the same object created in two different sessions hits the database twice, or whether both sessions refer to the same object created earlier in the in-memory cache. It should not be cached, because the L2 cache is disabled by the properties above.
My code is as below:
Session session = DataAccessManager.getManager().openSession();
ReferenceObjectRepository referenceObjectRepository = ReferenceObjectRepository.getInstance();
ReferenceObjectKey referenceObjectKey = new ReferenceObjectKey(getStringValue("testCacheByPass.input"));
//load object first time.
ReferenceObject referenceObject = referenceObjectRepository.load(ReferenceObject.class, referenceObjectKey);
logger.log(Level.SEVERE, "Cache ReferenceObject: " + referenceObject);
//load object in another session
Session sessionNew = DataAccessManager.getManager().openNewSession();
Object dbObject = referenceObjectRepository.load(ReferenceObject.class, referenceObjectKey);
logger.log(Level.SEVERE, "DB loaded ReferenceObject: " + dbObject);
Please help me: have I missed something, or do I need to do it some other way?
Add this line in each function where the call is made; I use it in the find function when querying a view.
((JpaEntityManager) em.getDelegate()).getServerSession().getIdentityMapAccessor().invalidateAll();
This line clears the cache before running the query:
public Entity find(Object id) {
    ((JpaEntityManager) em.getDelegate()).getServerSession().getIdentityMapAccessor().invalidateAll();
    return em.find(Entity.class, id);
}
You have disabled the object cache, but I think the query cache is still in play. You should be able to disable the query cache too with
<property name="eclipselink.query-results-cache" value="false"/>
<property name="eclipselink.refresh" value="true"/>
The same thing can be set with query hints, too; you could also try query hints if the persistence.xml configuration doesn't seem to be working.
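For illustration, the hint form would look something like this (a sketch; the hint names mirror the properties above, and the JPQL is illustrative):
Query query = em.createQuery("SELECT r FROM ReferenceObject r");
query.setHint("eclipselink.query-results-cache", "false"); // bypass the query results cache
query.setHint("eclipselink.refresh", "true");              // force a refresh from the database
List results = query.getResultList();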
Also note that essentially, even without the caching, you'd be comparing the same object, so unless it is detached it should be the same.
Related questions:
Disable eclipselink caching and query caching - not working?
Disable caching in JPA (eclipselink)

Can Hibernate work with MySQL's "ON DUPLICATE KEY UPDATE" syntax?

MySQL supports an "INSERT ... ON DUPLICATE KEY UPDATE ..." syntax that allows you to "blindly" insert into the database, and fall back to updating the existing record if one exists.
This is helpful when you want quick transaction isolation and the values you want to update to depend on values already in the database.
As a contrived example, let's say you want to count the number of times a story is viewed on a blog. One way to do that with this syntax might be:
INSERT INTO story_count (id, view_count) VALUES (12345, 1)
ON DUPLICATE KEY UPDATE view_count = view_count + 1
This will be more efficient and more effective than starting a transaction and handling the inevitable exceptions that occur when new stories hit the front page.
How can we do the same, or accomplish the same goal, with Hibernate?
First, Hibernate's HQL parser will throw an exception because it does not understand the database-specific keywords. In fact, HQL doesn't like any explicit inserts unless it's an "INSERT ... SELECT ....".
Second, Hibernate limits SQL to selects only. Hibernate will throw an exception if you attempt to call session.createSQLQuery("sql").executeUpdate().
Third, Hibernate's saveOrUpdate does not fit the bill in this case. Your tests will pass, but then you'll get production failures if you have more than one visitor per second.
Do I really have to subvert Hibernate?
Have you looked at the Hibernate @SQLInsert annotation?
@Entity
@Table(name = "story_count")
@SQLInsert(sql = "INSERT INTO story_count (id, view_count) VALUES (?, ?) ON DUPLICATE KEY UPDATE view_count = view_count + 1")
public class StoryCount
This is an old question, but I was having a similar issue and figured I would add to this topic. I needed to add a log to an existing StatelessSession audit log writer. The existing implementation used a StatelessSession because the caching behavior of the standard session implementation was unnecessary overhead, and we did not want our Hibernate listeners to fire for audit log writing. This implementation was about achieving the highest write performance possible, with no interactions.
However, the new log type needed insert-else-update behavior, where we intend to update existing log entries with a transaction time as a "flagging" type of behavior. In a StatelessSession, saveOrUpdate() is not offered, so we needed to implement the insert-else-update manually.
In light of these requirements:
You can use the MySQL "insert ... on duplicate key update" behavior via a custom sql-insert for the Hibernate persistent object. You can define the custom sql-insert clause either via annotation (as in the above answer) or via a sql-insert element in a Hibernate XML mapping, e.g.:
<class name="SearchAuditLog" table="search_audit_log" persister="com.marin.msdb.vo.SearchAuditLog$UpsertEntityPersister">
<composite-id name="LogKey" class="SearchAuditLog$LogKey">
<key-property
name="clientId"
column="client_id"
type="long"
/>
<key-property
name="objectType"
column="object_type"
type="int"
/>
<key-property
name="objectId"
column="object_id"
/>
</composite-id>
<property
name="transactionTime"
column="transaction_time"
type="timestamp"
not-null="true"
/>
<!-- the ordering of the properties is intentional and explicit in the upsert sql below -->
<sql-insert><![CDATA[
insert into search_audit_log (transaction_time, client_id, object_type, object_id)
values (?,?,?,?) ON DUPLICATE KEY UPDATE transaction_time=now()
]]>
</sql-insert>
The original poster asks about MySQL specifically. When I implemented the insert-else-update behavior with MySQL, I was getting exceptions when the 'update path' of the SQL executed. Specifically, MySQL reports 2 rows changed when only 1 row was updated (ostensibly because the existing row is deleted and the new row is inserted). See this issue for more detail on that particular feature.
So when the update returned twice the number of rows affected to Hibernate, Hibernate threw a BatchedTooManyRowsAffectedException, rolled back the transaction, and propagated the exception. Even if you were to catch the exception and handle it, the transaction had already been rolled back by that point.
After some digging I found that this was an issue with the entity persister Hibernate was using. In my case Hibernate was using SingleTableEntityPersister, which defines an Expectation that the number of rows updated should match the number of rows defined in the batch operation.
The final tweak necessary to get this behavior to work was to define a custom persister (as shown in the above XML mapping). In this instance all we had to do was extend SingleTableEntityPersister and 'override' the insert Expectation. E.g. I just tacked this static class onto the persistence object and defined it as the custom persister in the Hibernate mapping:
public static class UpsertEntityPersister extends SingleTableEntityPersister {
    public UpsertEntityPersister(PersistentClass arg0, EntityRegionAccessStrategy arg1, SessionFactoryImplementor arg2, Mapping arg3) throws HibernateException {
        super(arg0, arg1, arg2, arg3);
        this.insertResultCheckStyles[0] = ExecuteUpdateResultCheckStyle.NONE;
    }
}
It took quite a while digging through hibernate code to find this - I wasn't able to find any topics on the net with a solution to this.
If you are using Grails, I found this solution, which did not require moving your domain class into the Java world and using @SQLInsert annotations:
Create a custom Hibernate Configuration
Override the PersistentClass map
Add your custom INSERT SQL, using ON DUPLICATE KEY, to the persistent classes you want.
For example, if you have a Domain object called Person and you want to INSERTS to be INSERT ON DUPLICATE KEY UPDATE you would create a configuration like so:
public class MyCustomConfiguration extends GrailsAnnotationConfiguration {
    public MyCustomConfiguration() {
        super();
        classes = new HashMap<String, PersistentClass>() {
            @Override
            public PersistentClass put(String key, PersistentClass value) {
                if (Person.class.getName().equalsIgnoreCase(key)) {
                    value.setCustomSQLInsert("insert into person (version, created_by_id, date_created, last_updated, name) values (?, ?, ?, ?, ?) on duplicate key update id=LAST_INSERT_ID(id)", true, ExecuteUpdateResultCheckStyle.COUNT);
                }
                return super.put(key, value);
            }
        };
    }
}
and add this as your Hibernate Configuration in DataSource.groovy:
dataSource {
pooled = true
driverClassName = "com.mysql.jdbc.Driver"
configClass = 'MyCustomConfiguration'
}
Just a note to be careful using LAST_INSERT_ID, as this will NOT be set correctly if the UPDATE is executed instead of the INSERT unless you set it explicitly in the statement, e.g. id=LAST_INSERT_ID(id). I haven't checked where GORM gets the ID from, but I'm assuming somewhere it is using LAST_INSERT_ID.
Hope this helps.

Hibernate cascade delete not working

I am having an issue with a delete I am trying to do in Hibernate. Every time I try to delete, I get an error because child records exist, so the parent cannot be deleted. I want to delete the children and the parent. Here is my parent mapping:
<set name="communicationCountries" inverse="true" cascade="all,delete-orphan">
<key column="COM_ID" not-null="true" on-delete="cascade" />
<one-to-many class="com.fmr.fc.portlet.communications.vo.CommunicationCountry"/>
</set>
Here is the mapping for the child class:
<many-to-one name="communication" column="COM_ID" not-null="true" class="com.fmr.fc.portlet.communications.vo.Communication" cascade="all"/>
EDIT - When I do an insert, the data is inserted into both the parent and the child.
When I do an update using a new object with the ID of the object I want to modify, the parent is updated but any existing children are added a second time. I cannot seem to remove children. When I retrieve an object using the ID and modify it, I get an error telling me org.hibernate.LazyInitializationException: could not initialize proxy - the owning Session was closed. I suspect this is because I am getting the object in one getHibernateTemplate() call and saving it in another, and these are two different sessions?
When I do a delete, I get an error because children exist. I know I am just doing something completely stupid due to lack of having a clue as to how this all works.
Here are my update and delete methods; in this case, the update/save retrieves and modifies the object before saving. The delete uses a new object with the same ID as the one in the DB I want to delete:
public void deleteCommunication(Communication comm) throws DataAccessException
{
    getHibernateTemplate().delete(comm);
}

public void saveCommunication(Communication comm) throws DataAccessException
{
    Communication existing = (Communication) getHibernateTemplate().load(Communication.class, comm.getComId());
    existing.getCommunicationCountries().clear();
    getHibernateTemplate().saveOrUpdate(existing);
}
UPDATE
So here are my new methods, but still no joy. I think my issue has to do with the children not being loaded/initialized, etc. With the delete, though, I can't understand why the cascading delete isn't happening.
Thanks so much for your help so far. I have already reached my deadline for this work, so if I don't get it fixed over the weekend I am just going to have to resort to executing HQL queries, as I know that will work for me :(
public void deleteCommunication(Integer id) throws DataAccessException
{
    HibernateTemplate hibernate = getHibernateTemplate();
    Communication existing = (Communication) hibernate.get(Communication.class, id);
    hibernate.initialize(existing.getCommunicationCountries());
    hibernate.delete(existing);
}

public void updateCommunication(Communication comm) throws DataAccessException
{
    HibernateTemplate hibernate = getHibernateTemplate();
    Communication existing = (Communication) hibernate.get(Communication.class, comm.getComId());
    hibernate.initialize(existing.getCommunicationCountries());
    existing.getCommunicationCountries().clear();
    hibernate.saveOrUpdate(existing);
}
In no particular order:
A) Assuming "myID" in your code is your entity's identifier, you should be using session.get() instead of criteria - it's faster and most definitely easier:
MyObject obj = (MyObject) session.get(MyObject.class, new Long(1));
B) If you are using Spring (judging by getHibernateTemplate() call), you should use it consistently :-) and not resort to calling session directly unless absolutely necessary - and it's pretty much never necessary. The above get method would therefore become:
MyObject obj = (MyObject) getHibernateTemplate().get(MyObject.class, new Long(1));
If you need to write a criteria-based query, you can use DetachedCriteria and the HibernateTemplate.findByCriteria() method:
DetachedCriteria crit = DetachedCriteria.forClass(MyObject.class)
        .add(Property.forName("myId").eq(new Long(1)));
List results = getHibernateTemplate().findByCriteria(crit);
C) You normally should not evict() objects from the session (doing it immediately before closing is rather pointless anyway). Nor should you normally close() the session you've obtained from HibernateTemplate.
D) Finally, as far as automatically saving children (one-to-many collection elements) goes - take a look at this example which provides a good explanation of different cascade settings. Post your mappings / code if you're still having problems.
Update (based on question clarifications):
1) Your mapping looks OK except for cascade on parent in child class (<many-to-one name="communication" cascade="all"/>). You most likely do not want this.
2) LazyInitializationException is thrown because Hibernate by default maps collections as lazy, meaning that the children (communicationCountries) will not be loaded until first access. If that access happens when the session is already closed, the exception is thrown. You can ensure that the collection is populated by calling Hibernate.initialize() on it.
3) Your delete() should work fine AS LONG AS you're calling it on entity instance returned by Hibernate rather than one you've created yourself (say, unmarshalled from remote call) for which "communicationCountries" collection is not populated. In order for Hibernate to delete children it needs to know they exist.
4) Your update(), on the other hand, is wrong. You're loading an entity, clearing its children, and saving it again - which is fine per se - but that has no connection to the parameter being passed in.
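A sketch of an update() that actually applies the incoming parameter, in the same HibernateTemplate style as the question (which fields to copy over depends on your model):
public void updateCommunication(Communication comm) throws DataAccessException
{
    HibernateTemplate hibernate = getHibernateTemplate();
    Communication existing = (Communication) hibernate.get(Communication.class, comm.getComId());
    hibernate.initialize(existing.getCommunicationCountries());

    // copy state from the detached parameter onto the attached instance
    existing.getCommunicationCountries().clear();
    existing.getCommunicationCountries().addAll(comm.getCommunicationCountries());
    // ... copy any other changed fields from comm onto existing ...

    hibernate.saveOrUpdate(existing);
}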
