I have two user objects, and when I try to save one using
session.save(userObj);
I am getting the following error:
Caused by: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session:
[com.pojo.rtrequests.User#com.pojo.rtrequests.User@d079b40b]
I am creating the session using
BaseHibernateDAO dao = new BaseHibernateDAO();
rtsession = dao.getSession(userData.getRegion(),
BaseHibernateDAO.RTREQUESTS_DATABASE_NAME);
rttrans = rtsession.beginTransaction();
rttrans.begin();
rtsession.save(userObj1);
rtsession.save(userObj2);
rtsession.flush();
rttrans.commit();
rtsession.close(); // in finally block
I also tried doing session.clear() before saving; still no luck.
This is the first time I am getting the session object when a user request comes in, so I don't understand why it says the object is already present in the session.
Any suggestions?
I have had this error many times and it can be quite hard to track down...
Basically, what Hibernate is saying is that you have two objects with the same identifier (the same primary key), but they are not the same object.
I would suggest you break down your code: comment out bits until the error goes away, then put the code back until it returns, and you should find the error.
It most often happens via cascading saves: there is a cascaded save between objects A and B, but object B has already been associated with the session as a different instance of B than the one A references.
What primary key generator are you using?
The reason I ask is that this error relates to how you're telling Hibernate to ascertain the persistent state of an object (i.e. whether an object is persistent or not). The error could be happening because Hibernate is trying to persist an object that is already persistent: if you use save, Hibernate will try to persist the object, and there may already be an object with that same primary key associated with the session.
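A minimal sketch of that situation (hypothetical entity, assuming an assigned rather than generated identifier):
// Two different instances carrying the same assigned primary key
User first = new User();
first.setId(42L);
session.save(first);       // id 42 is now associated with the session

User second = new User();  // a different instance...
second.setId(42L);         // ...with the same identifier
session.save(second);      // NonUniqueObjectException (at save or flush time)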
Example
Suppose you have a Hibernate entity mapped to a table with 10 rows, keyed on a composite primary key (column 1 and column 2). At some point you removed 5 of those rows. Now, if you try to add the same 10 rows again, the 5 rows that had been removed will be persisted without errors, but the remaining 5 rows, which already exist, will throw this exception.
So the simple check is: have you updated or removed values in the table, and are you later trying to insert the same objects again? See the sketch below.
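As a sketch of one workaround (assuming the rows are not already associated with the current session), saveOrUpdate() updates the surviving rows instead of failing to re-insert them:
// Hypothetical sketch: re-adding rows that may or may not still exist in the table
for (MyRow row : rowsToReinsert) {
    session.saveOrUpdate(row);  // inserts the removed rows, updates the existing ones
}
session.flush();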
This is one of the places where Hibernate creates more problems than it solves.
In my case there were many objects with the same identifier 0, because they were new and didn't have one yet; the database generates them. Somewhere I read that 0 signals "id not set". The intuitive way to persist them is to iterate over them and tell Hibernate to save each object. But you can't do that - "of course you should know that Hibernate works this and that way, therefore you have to...".
So now I can try changing the ids from long to Long and see whether it works then.
In the end it's easier to do it with a simple mapper of your own, because Hibernate is just an additional opaque burden.
Another example: trying to read parameters from one database and persist them in another forces you to do nearly all the work manually. But if you have to do it anyway, using Hibernate is just additional work.
Use session.evict(object). The evict() method removes the instance from the session cache. So for the first save, save the object by calling session.save(object) before evicting it from the cache. In the same way, update the object by calling session.saveOrUpdate(object) or session.update(object) before calling evict().
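A short sketch of that save-then-evict pattern (names are hypothetical):
session.save(user);          // first save: user is now in the session cache
session.evict(user);         // remove it so a later instance with the same id cannot conflict
// ... later, to update:
session.saveOrUpdate(user);
session.evict(user);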
This can happen when you have used the same session object for both reading and writing. How?
Say you have created one session.
You read a record from the employee table with primary key Emp_id=101.
Now you modify the record in Java.
And you go to save the employee record to the database.
We have not closed the session anywhere here.
The object that was read also persists in the session, and it conflicts with the object we wish to write. Hence this error, as sketched below.
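A minimal sketch of that read-then-write conflict (entity and setter names are assumptions):
Employee read = (Employee) session.get(Employee.class, 101);  // now held in the session

Employee edited = new Employee();  // a detached copy built elsewhere, e.g. from a form
edited.setEmpId(101);              // same identifier as the instance already in the session
edited.setName("New name");

session.update(edited);            // NonUniqueObjectException: two instances, one id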
As somebody already pointed out above, I ran into this problem when I had cascade=all on both ends of a one-to-many relationship. Assume A --> B (one-to-many from A, many-to-one from B). Updating an instance of B on A and then calling saveOrUpdate(A) resulted in a circular save request: the save of A triggers the save of B, which triggers the save of A... and on the third pass, as the entity (of A) was being added to the session's persistence context, the duplicate-object exception was thrown. I could solve it by removing the cascade from one end, as sketched below.
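A minimal sketch of that fix using JPA annotations (hypothetical entities; the same idea applies to hbm.xml cascade attributes):
import javax.persistence.*;
import java.util.Set;

@Entity
class A {
    @Id @GeneratedValue Long id;
    @OneToMany(mappedBy = "a", cascade = CascadeType.ALL)  // cascade kept on one end only
    Set<B> bs;
}

@Entity
class B {
    @Id @GeneratedValue Long id;
    @ManyToOne  // no cascade back to A, which breaks the circular save
    A a;
}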
You can use session.merge(obj) if you are saving the same persistent object, with the same identifier, from different sessions.
It worked; I had the same issue before.
I ran into this problem by:
Deleting an object (using HQL)
Immediately storing a new object with the same id
I resolved it by flushing after the delete and clearing the cache before saving the new object:
String delQuery = "DELETE FROM OasisNode";
session.createQuery( delQuery ).executeUpdate();
session.flush();
session.clear();
This problem occurs when we update the same object in the session that we used to fetch it from the database.
You can use Hibernate's merge method instead of update.
E.g. first use session.get(), and then use session.merge(object). This method creates no problem; merge() can also be used to update the object in the database, as in the sketch below.
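A minimal sketch of that get-then-merge sequence (entity and setter names are hypothetical):
Customer loaded = (Customer) session.get(Customer.class, 1);  // now in the session
Customer fromUi = new Customer();  // detached instance built from form data
fromUi.setId(1);
fromUi.setName("Updated name");
session.merge(fromUi);             // state is copied onto 'loaded'; no exception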
I also ran into this problem and had a hard time finding the error.
The problem I had was the following:
The object had been read by a DAO using a different Hibernate session. To avoid this exception, simply re-read the object with the DAO that is going to save/update it later on.
so:
class A {
    void readFoo() {
        someDaoA.read(myBadAssObject); // different session than in class B
    }
}

class B {
    void saveFoo() {
        someDaoB.read(myBadAssObjectAgain); // re-read with the session that will save
        // [...]
        myBadAssObjectAgain.fooValue = "bar";
        persist();
    }
}
Hope that saves some people a lot of time!
Get the object inside the session; here is an example:
MyObject ob = (MyObject) session.get(MyObject.class, id);
By default it was using the identity strategy, but I fixed it by adding
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
Are your id mappings correct? If the database is responsible for creating the id through an identity column, you need to map your user object to that.
Check whether you forgot to put @GeneratedValue on the @Id column.
I had the same problem with a many-to-many relationship between Movie and Genre. The program threw the error
Hibernate Error: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session
I found out later that I just had to make sure @GeneratedValue was present on the GenreId getter method.
I encountered this problem when deleting an object; neither evict nor clear helped.
/**
 * Deletes the given entity, even if hibernate has an old reference to it.
 * If the entity has already disappeared due to a db cascade then noop.
 */
public void delete(final Object entity) {
    Object merged = null;
    try {
        merged = getSession().merge(entity);
    }
    catch (ObjectNotFoundException e) {
        // disappeared already due to cascade
        return;
    }
    getSession().delete(merged);
}
Before the point where the repetitive objects begin, you should close the session and then start a new session:
session.close();
session = HibernateUtil.getSessionFactory().openSession();
This way, in one session there is no more than one entity with the same identifier.
I had a similar problem. In my case I had forgotten to set the increment_by value in the database sequence to the same value used by the cache_size and allocationSize (the arrows point to the attributes in question).
SQL:
CREATED 26.07.16
LAST_DDL_TIME 26.07.16
SEQUENCE_OWNER MY
SEQUENCE_NAME MY_ID_SEQ
MIN_VALUE 1
MAX_VALUE 9999999999999999999999999999
INCREMENT_BY 20 <-
CYCLE_FLAG N
ORDER_FLAG N
CACHE_SIZE 20 <-
LAST_NUMBER 180
Java:
@SequenceGenerator(name = "mySG", schema = "my",
    sequenceName = "my_id_seq", allocationSize = 20 /* <- */)
Late to the party, but this may help future readers:
I got this issue when I selected a record using getSession() and then updated another record with the same identifier using the same session. Code below.
Customer existingCustomer = getSession().get(Customer.class, 1);
Customer customerFromUi; // this customer's details are coming from the UI, with identifier 1
getSession().update(customerFromUi); // here the issue comes
This should never be done. The solution is either to evict the loaded object from the session before the update or to change the business logic, e.g.:
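A sketch of the evict() variant for the snippet above:
Customer existingCustomer = getSession().get(Customer.class, 1);
getSession().evict(existingCustomer);  // detach the loaded copy first
getSession().update(customerFromUi);   // no conflicting instance left in the session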
Just check whether the id is null or 0, like:
if (offersubformtwo.getId() != null && offersubformtwo.getId() != 0)
in add or update, where the contents are set from the form into the POJO.
I'm new to NHibernate, and my problem was that I used a different session to query my object than the one I used to save it. So the saving session didn't know about the object.
It seems obvious, but after reading the previous answers I was looking everywhere for two objects, not two sessions.
@GeneratedValue(strategy=GenerationType.IDENTITY) - adding this annotation to the primary key property in your entity bean should solve this issue.
I resolved this problem.
It was happening because I had forgotten to specify the generator type of the PK property in the bean class. So declare one, like so:
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private int id;
When bean objects are persisted without it, every object acquires the same id, so the first object is saved, and when another object is to be persisted the Hibernate framework throws this exception: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session.
The problem happens because in the same Hibernate session you are trying to save two objects with the same identifier. There are two solutions:
1. It is happening because you have not configured your mapping.xml file correctly for the id field, as below:
<id name="id">
<column name="id" sql-type="bigint" not-null="true"/>
<generator class="hibernateGeneratorClass"</generator>
</id>
2. Overload the getSession method to accept a parameter like isSessionClear, and clear the session before returning the current session, as below:
public static Session getSession(boolean isSessionClear) {
    if (session.isOpen() && isSessionClear) {
        session.clear();
        return session;
    } else if (session.isOpen()) {
        return session;
    } else {
        return sessionFactory.openSession();
    }
}
This will cause the existing session objects to be cleared, and even if Hibernate doesn't generate a unique identifier, assuming you have configured your database properly for the primary key (using something like AUTO_INCREMENT), it should work for you.
Beyond what wbdarby said, it can even happen when an object is fetched by passing its identifier to an HQL query. If you then modify the object's fields and save it back to the DB (the modification could be an insert, delete, or update) over the same session, this error will appear. Try clearing the Hibernate session before saving your modified object, or create a brand new session.
Hope I helped ;-)
I had the same error. I was replacing my Set with a new one obtained from Jackson.
To solve this, I keep the existing set: I remove from the old set the elements unknown to the new list with retainAll.
Then I add the new ones with addAll.
this.oldSet.retainAll(newSet);
this.oldSet.addAll(newSet);
There is no need to get hold of the Session and manipulate it.
Try this; the below worked for me!
In the hbm.xml file:
We need to set the dynamic-update attribute of the class tag to true:
<class dynamic-update="true">
Set the class attribute of the generator tag under the unique column to identity:
<generator class="identity">
Note: set the unique column to identity rather than assigned.
I just had the same problem. I solved it by adding this line:
@GeneratedValue(strategy=GenerationType.IDENTITY)
Another thing that worked for me was to make the instance variable Long in place of long.
I had my primary key declared as long id; changing it to Long id worked.
All the best.
You can always do a session flush.
Flush will synchronize the state of all the objects in your session (please, someone correct me if I'm wrong), and maybe it would solve your problem in some cases.
Implementing your own equals and hashCode may help you too, as sketched below.
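A minimal sketch of identifier-based equals and hashCode (entity name is hypothetical):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof User)) return false;
    User other = (User) o;
    return id != null && id.equals(other.id);  // equality by database identifier
}

@Override
public int hashCode() {
    return (id != null) ? id.hashCode() : 0;
}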
Check your cascade settings. The cascade settings on your models could be causing this: I removed the cascade settings (essentially not allowing cascaded inserts/updates), and this solved my problem.
I found this error as well. What worked for me was to make sure that the (auto-generated) primary key is not a primitive (i.e. long, int, etc.) but an object (i.e. Long, Integer, etc.).
When you create your object to save it, make sure you pass null and not 0, as sketched below.
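A minimal sketch, with a hypothetical User entity:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;  // Long, not long: null signals "no id assigned yet"

// ...
User u = new User();  // id stays null, so Hibernate treats it as transient
session.save(u);      // with a primitive long, every new object would carry id 0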
I am in the process of moving an existing Google AppEngine application from the master-slave datastore (MSD) to the new high-replication datastore (HRD).
The application is written in Java, using Objectify 3.1 for persistence.
In my old (MSD) application, I have an entity like:
public class Session {
    @Id public Long id;
    public Key<Member> member;
    /* other properties and methods */
}
In the new (HRD) application, I have changed this into:
public class Session {
    @Id public Long id;
    // HRD: @Parent is needed to ensure strongly consistent queries.
    @Parent public Key<Member> member;
    /* other properties and methods */
}
I need the Session objects to be strongly consistent with their parent Member object.
When I migrate (a working copy of) my application using Google's HRD migration tool, all Members and Sessions are there. However, all member properties of Session objects become null. Apparently, these properties are not migrated.
I was prepared to re-parent my Session objects, but if the member property is null, that is impossible. Can anyone explain what I am doing wrong, and if this problem can be solved?
@Id and @Parent are not "real" properties in the underlying entity. They are part of the key which defines the entity; Objectify maps them to properties on your POJO.
The transformation you are trying to make is one of the more complicated problems in GAE. Remember that an entity with a different parent (say, some value vs null) is a different entity; it has a different key. For example, loading an entity with a null parent, setting the parent to a value, and saving the entity, does not change the entity -- it creates a new one. You would still need to delete the old entity and update any foreign key references.
Your best bet is to import the data as-is with the regular 'member' field. You can also have the @Parent field (call it anything; you can rename it at any time since it's not a "real" property). After you migrate, make a pass through your data:
Load each Session
Check for null parentMember. If null:
Assign parentMember and save entity
Delete entity with null parentMember
Be very careful of foreign key references if you do this.
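A rough sketch of that pass against Objectify 3.1 (the field name parentMember and the copy/delete details are assumptions, not tested migration code):
Objectify ofy = ObjectifyService.begin();
for (Session s : ofy.query(Session.class)) {
    if (s.parentMember == null) {  // not yet re-parented
        Key<Session> oldKey = new Key<Session>(Session.class, s.id);
        s.parentMember = s.member;  // assign the @Parent from the old field
        ofy.put(s);                 // stored under a NEW key (parent + id)
        ofy.delete(oldKey);         // remove the parentless original
    }
}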
I have two objects:
public class ParentObject {
    // some basic bean info
}

public class ChildObject extends ParentObject {
    // more bean info
}
Each of these classes corresponds to a different table in the database. I am using Hibernate to query the ChildObject, which will in turn populate the parent object's values.
I have defined my mapping file like so:
<hibernate-mapping>
<class name="ParentObject"
table="PARENT_OBJECT">
<id name="id"
column="parent"id">
<generator class="assigned"/>
</id>
<property name="beaninfo"/>
<!-- more properties -->
<joined-subclass name="ChildObject" table="CHILD_OBJECT">
<key column="CHILD_ID"/>
<!--properties again-->
</joined-subclass>
</class>
</hibernate-mapping>
I can use Hibernate to query the two tables without issue.
I use
session.createQuery("from ChildObject as child ");
This is all basic Hibernate stuff. However, the part I am having issues with is that I need to apply locks to all the tables in the query.
I can set the lock mode for the child object using query.setLockMode("child", LockMode.?), but I cannot seem to find a way to place a lock on the parent table.
I am new to Hibernate and am still working around a few mental roadblocks. The question is: how can I place a lock on the parent table?
I was wondering if there is a way to do this without undoing the polymorphic structure I have set up.
Why do you have to lock both tables? I'm asking because depending on what you're trying to do there may be alternative solutions to achieve what you want.
The way things are, Hibernate normally only locks the root table unless you're using some exotic database / dialect. So, chances are you're already locking your ParentObject table rather than ChildObject.
Update (based on comment):
Since you are using an exotic database :-) which doesn't support FOR UPDATE syntax, Hibernate locks the "primary" tables as specified in the query ("primary" in this case being the table mapped for the entity listed in the FROM clause, not the root of the hierarchy - e.g. ChildObject, not ParentObject). Since you want to lock both tables, I'd suggest you try one of the following:
Call session.lock() on the entities after you've obtained them from the query. This should lock the root table of the hierarchy; however, I'm not 100% sure it will work, because technically you're trying to "upgrade" a lock that's already held on a given entity.
Try to cheat by explicitly naming the ParentObject table in your query and requesting a lock mode for it:
String hql = "select c from ChildObject c, ParentObject p where c.id = p.id";
session.createQuery(hql)
.setLockMode("c", LockMode.READ)
.setLockMode("p", LockMode.READ).list();
I am having an issue with a delete I am trying to do in Hibernate. Every time I try to delete, I get an error because child records exist, so the parent cannot be deleted. I want to delete the children and the parent. Here is my parent mapping:
<set name="communicationCountries" inverse="true" cascade="all,delete-orphan">
<key column="COM_ID" not-null="true" on-delete="cascade" />
<one-to-many class="com.fmr.fc.portlet.communications.vo.CommunicationCountry"/>
</set>
Here is the mapping for the child class:
<many-to-one name="communication" column="COM_ID" not-null="true" class="com.fmr.fc.portlet.communications.vo.Communication" cascade="all"/>
EDIT - When I do an insert, the data is inserted into both the parent and the child.
When I do an update using a new object with the ID of the object I want to modify, the parent is updated but any existing children are added a second time; I cannot seem to remove children. When I instead retrieve the object by its ID and modify that, I get an error telling me org.hibernate.LazyInitializationException: could not initialize proxy - the owning Session was closed. I suspect this is because I am getting the object in one getHibernateTemplate() call and saving it in another, and these are two different sessions?
When I do a delete, I get an error because children exist. I know I am just doing something completely stupid due to a lack of understanding of how this all works.
Here are my update and delete methods; in this case the update/save retrieves and modifies before saving. The delete uses a new object with the same ID as the one in the DB I want to delete:
public void deleteCommunication(Communication comm) throws DataAccessException
{
    getHibernateTemplate().delete(comm);
}

public void saveCommunication(Communication comm) throws DataAccessException
{
    Communication existing = (Communication) getHibernateTemplate().load(Communication.class, comm.getComId());
    existing.getCommunicationCountries().clear();
    getHibernateTemplate().saveOrUpdate(existing);
}
UPDATE
So here are my new methods, but still no joy. I think my issue has to do with the children not being loaded/initialized, etc. With the delete, though, I can't understand why the cascading delete isn't happening.
Thanks so much for your help so far. I have already reached my deadline for this work, so if I don't get it fixed over the weekend I am just going to resort to executing HQL queries, as I know that will work for me :(
public void deleteCommunication(Integer id) throws DataAccessException
{
    HibernateTemplate hibernate = getHibernateTemplate();
    Communication existing = (Communication) hibernate.get(Communication.class, id);
    hibernate.initialize(existing.getCommunicationCountries());
    hibernate.delete(existing);
}

public void updateCommunication(Communication comm) throws DataAccessException
{
    HibernateTemplate hibernate = getHibernateTemplate();
    Communication existing = (Communication) hibernate.get(Communication.class, comm.getComId());
    hibernate.initialize(existing.getCommunicationCountries());
    existing.getCommunicationCountries().clear();
    hibernate.saveOrUpdate(existing);
}
In no particular order:
A) Assuming "myID" in your code is your entity's identifier, you should be using session.get() instead of criteria - it's faster and most definitely easier:
MyObject obj = (MyObject) session.get(MyObject.class, new Long(1));
B) If you are using Spring (judging by getHibernateTemplate() call), you should use it consistently :-) and not resort to calling session directly unless absolutely necessary - and it's pretty much never necessary. The above get method would therefore become:
MyObject obj = (MyObject) getHibernateTemplate().get(MyObject.class, new Long(1));
If you need to write a criteria-based query, you can use DetachedCriteria and the HibernateTemplate.findByCriteria() method:
DetachedCriteria crit = DetachedCriteria.forClass(MyObject.class)
.add(Property.forName("myId").eq( new Long(1) ) );
List results = getHibernateTemplate().findByCriteria(crit);
C) You normally should not evict() objects from the session (doing it immediately before closing is rather pointless anyway). Nor should you normally close() a session you've obtained from HibernateTemplate.
D) Finally, as far as automatically saving children (one-to-many collection elements) goes - take a look at this example which provides a good explanation of different cascade settings. Post your mappings / code if you're still having problems.
Update (based on question clarifications):
1) Your mapping looks OK except for the cascade on the parent in the child class (<many-to-one name="communication" cascade="all"/>). You most likely do not want this.
2) LazyInitializationException is thrown because Hibernate maps collections as lazy by default, meaning the children (communicationCountries) are not loaded until first access. If that first access happens when the session is already closed, the exception is thrown. You can ensure the collection is populated by calling Hibernate.initialize() on it.
3) Your delete() should work fine AS LONG AS you call it on an entity instance returned by Hibernate, rather than one you've created yourself (say, unmarshalled from a remote call) whose communicationCountries collection is not populated. In order for Hibernate to delete the children, it needs to know they exist.
4) Your update(), on the other hand, is wrong. You're loading an entity, clearing its children, and saving it again - which is fine per se - but that has no connection to the parameter being passed in.
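A sketch of one way to fix it (assuming Communication exposes its fields as in the question; each re-added child must also reference the loaded parent):
public void updateCommunication(Communication comm) throws DataAccessException
{
    HibernateTemplate hibernate = getHibernateTemplate();
    Communication existing = (Communication) hibernate.get(Communication.class, comm.getComId());
    hibernate.initialize(existing.getCommunicationCountries());
    existing.getCommunicationCountries().clear();
    existing.getCommunicationCountries().addAll(comm.getCommunicationCountries());
    // ... copy the other updated simple fields from comm onto existing ...
    hibernate.saveOrUpdate(existing);
}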