Is there any way to force the persistence order of objects in JPA 2 w/Hibernate?
Say I have three classes: Parent, Child, and Desk. Parent owns collections of Child and Desk via @OneToMany; a Child can have one Desk. Furthermore, Parent defines transitive persistence on both collections, but Child does not define transitive persistence on its Desk:
class Parent {
    @OneToMany(cascade=CascadeType.ALL) Collection<Child> children;
    @OneToMany(cascade=CascadeType.ALL) Collection<Desk> desks;
    ...
}
class Child {
    @OneToOne(cascade={}) Desk desk;
    @ManyToOne Parent parent;
}
class Desk {
    @ManyToOne Parent parent;
}
Ideally, I'd like to create a Desk and a Child at the same time and persist the relationships:
Parent parent = em.find(...);
Child child = new Child();
Desk desk = new Desk();
// add both desk and child to parent collections here
// set Parent attribute on both desk and child
// and set the new Desk on the new Child
If I execute the above code in a transaction, Hibernate cascades from Parent to its new Child and attempts to persist the new Child object. Unfortunately, this results in an "object references an unsaved transient instance" error, because the cascade from Parent to Desk hasn't resulted in the new Desk object being persisted yet.
I know I can fix the problem with an EntityManager flush() operation (em.flush()) - create the Child, create the Desk, attach both to Parent, em.flush(), then attach the Desk to Child - but I'm not super-keen on littering my code with em.flush() to save complex graphs of new persistent objects. Is there a different way to signal to JPA 2 or Hibernate that it should always persist the new Desk before the new Child?
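For reference, a minimal sketch of that flush-based workaround, run inside an active transaction; the accessor names (getChildren(), getDesks(), setParent(), setDesk()) are assumptions, not part of the original mappings:
// Sketch of the flush-based workaround described above; accessor names are assumed.
Parent parent = em.find(Parent.class, parentId); // parentId assumed to be known
Child child = new Child();
Desk desk = new Desk();
parent.getChildren().add(child);
parent.getDesks().add(desk);
child.setParent(parent);
desk.setParent(parent);
em.flush();          // cascades from Parent, so both the new Child and the new Desk are INSERTed
child.setDesk(desk); // the Desk is no longer transient, so this reference is now legal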
Looking at your description, I think the persistence provider tries to persist in this order:
First, Parent.children[i].
Each children[i] holds a transient reference to a Desk, and the system fails to persist that Desk because the Child.desk association is not configured with CascadeType.PERSIST.
So the failure happens while persisting the Desk, but you think it comes from the Parent.desks[i] path (which is configured to cascade), when it may actually come from the Child.desk path.
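If that diagnosis is right, one option (an assumption on my part, not something from the original post) is to let the Child.desk association cascade persistence as well, so the Desk gets saved no matter which path Hibernate reaches it through first:
class Child {
    // Sketch: cascading PERSIST here lets Hibernate save the Desk even when it is
    // reached through Parent.children before Parent.desks.
    @OneToOne(cascade=CascadeType.PERSIST) Desk desk;
    @ManyToOne Parent parent;
}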
Related
I'm using Spring, Hibernate and JPA to model a system where there are four levels in the database hierarchy, each with one-to-many relationships, meaning: GreatGrandParent (GGP) has a one-to-many with the GrandParent (GP), the GP has a one-to-many with the Parent (P), and P has a one-to-many with the Child (C).
I want to be able to persist/update a GP/P/C entity and have it cascade up and down the tree. I also need to be able to retrieve the lower level entities by their parents.
The way I currently have it implemented is with a repository for each of the classes. When I want to persist a P under its GP, my code looks roughly like this
public Parent persistParent( final UUID grandParentId,
                             final Parent parent )
{
    final GrandParent grandParentEntity = grandParentRepository.findById( grandParentId );
    parent.setGrandParent( grandParentEntity );
    return parentRepository.save( parent );
}
and when I want to retrieve a lower-level entity by its parent it looks like this
public Parent findParentByGrandParent( final GrandParent grandParent )
{
    return parentRepository.findParentByGrandParent( grandParent );
}
This approach works fine. However, some have commented that I should just have one top level GGPRepository, to handle all of my persist/update operations, and then use EntityManager to make a custom SQL query to retrieve sub entities.
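For comparison, a rough sketch of what the suggested read side of that alternative might look like: a single top-level repository for writes, plus a JPQL query through the EntityManager for fetching sub-entities (the query, entity names and fields here are assumptions, not the actual model):
// Hypothetical sketch of the suggested alternative: an injected EntityManager
// plus a JPQL query to fetch sub-entities by their parent; names are assumptions.
public List<Parent> findParentsByGrandParent( final GrandParent grandParent )
{
    return entityManager
        .createQuery( "select p from Parent p where p.grandParent = :gp", Parent.class )
        .setParameter( "gp", grandParent )
        .getResultList();
}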
Is one approach better than the other? Why so?
Thanks in advance
I have the following hierarchy in my classes
@Entity
@Table(name = "Parent")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class Parent {
}

@Entity
@DiscriminatorValue("FirstChild")
public class FirstChild extends Parent {
}

@Entity
@DiscriminatorValue("SecondChild")
public class SecondChild extends Parent {
}
This creates a table Parent as expected.
Some business logic in my app:
As the request is accepted, it is persisted as the "Parent" type, which creates the following entry in the table:
DTYPE    id   value
Parent   1    "some value"
As the request is processed it could either be of type FirstChild or of type SecondChild, so somewhere in my code I have
if (some condition is met) {
    // change the Parent to its FirstChild type
}
else {
    // change it to the SecondChild type
}
Is my understanding and usage of inheritance correct? Downcasting the objects in the if block throws runtime exceptions, and I'm not sure whether changing the type of an object would even change the DTYPE value in the database. I am essentially using inheritance to keep things organized by their types. Is there some magic I can do in the constructors to achieve this? The real question is: is the DTYPE modifiable on an update?
To work with this further, I created constructors in the child classes such as
public FirstChild(Parent parent) {
    id = parent.id;
}
but this creates a new row for the subtype:
DTYPE        id   value
Parent       1    "some value"
FirstChild   2    "some value"
I am making sure that the FirstChild record is created with the parent ID (1), but I am not sure why a second row is created.
There is no real inheritance in relational databases; Hibernate is just simulating it. Inheritance is a programming concept that lets one class build on another as its owner, but a table cannot use another table as its owning table. If a child table has a row, that row has a parent, but where is the parent? The parent is just another element, a row of data. So inheritance has to be simulated in the database.
Hibernate has several inheritance strategies. 'SINGLE_TABLE' is one of them: it collects the parent table and the child tables into one table. To differentiate which row belongs to which class, it uses a discriminator column ('DiscriminatorColumn') in the database. This column cannot be changed at runtime; it is static and declared through your Java class. You can use 'DiscriminatorValue' to choose the value that identifies a Java class in the discriminator column.
So your understanding of inheritance has to shift for relational databases: every child's data must end up as rows of the parent table...
But the real question is: is the DTYPE modifiable on an update?
No, you can't: source.
From your example, your pure Java code would look like this:
Parent parent = new Parent(); // object received from the request, stored in the database as it is
if (some condition is met) {
    // FirstChild firstChild = (FirstChild) parent; // this would throw a ClassCastException
    FirstChild firstChild = new FirstChild(parent); // create a new object; the old parent is left for the GC
}
So Hibernate acts the same way as Java: you cannot update the Parent in place, you have to create a new record for the FirstChild and delete the Parent.
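A hedged sketch of what that replacement could look like through the EntityManager (the copy constructor and the requestId lookup are assumptions):
// Sketch only, not the OP's code: replace the Parent row with a FirstChild row.
em.getTransaction().begin();
Parent parent = em.find(Parent.class, requestId);  // requestId assumed
FirstChild firstChild = new FirstChild(parent);    // copy whatever state is needed
em.remove(parent);                                 // delete the old Parent row
em.flush();                                        // make sure the DELETE runs before the new INSERT
em.persist(firstChild);                            // new row with DTYPE = 'FirstChild'
em.getTransaction().commit();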
Wouldn't it be helpful for you to use @DiscriminatorFormula, instead of checking whether some condition is met when processing the request?
@DiscriminatorFormula("case when value = 'some value' then 'FirstChild' else 'SecondChild' end")
Then you wouldn't have to save the Parent first and then process its records.
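For illustration, a sketch of where that annotation would go; it is Hibernate-specific (org.hibernate.annotations.DiscriminatorFormula), it replaces the discriminator column, and the formula below simply reuses the 'value' column from the example:
// Sketch: @DiscriminatorFormula sits on the root entity instead of a discriminator column.
@Entity
@Table(name = "Parent")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorFormula("case when value = 'some value' then 'FirstChild' else 'SecondChild' end")
public class Parent {
}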
These are my findings:
(1) There is no objective reason why a Child should extend Parent. In terms of modeling I would instead have one abstract class, let's call it FamilyMember, and Parent, FirstBorn and SecondBorn would all extend FamilyMember.
(2) The downcasting problem would be solved by (1).
(3) Hibernate does not support updating the discriminator value. This means that if a child comes of age and is no longer a child but a parent, the child record needs to be deleted and a new record corresponding to a Parent created. That is how it works: a child is a child until it is no longer a child, and then the child record gets deleted and a new, non-child record gets created.
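A sketch of the hierarchy described in (1); only the class names come from the finding, the annotations and id field are assumptions:
// Sketch of finding (1): an abstract root, so no concrete type extends another concrete type.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public abstract class FamilyMember {
    @Id @GeneratedValue Long id;
}

@Entity
@DiscriminatorValue("Parent")
public class Parent extends FamilyMember {
}

@Entity
@DiscriminatorValue("FirstBorn")
public class FirstBorn extends FamilyMember {
}

@Entity
@DiscriminatorValue("SecondBorn")
public class SecondBorn extends FamilyMember {
}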
I'm facing a problem with EntityManager.merge() where the merge is cascaded to other entities that have already been deleted from the database. Say I have the following entities:
@Entity
public class Parent {
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, mappedBy = "parent")
    private List<Child> children;

    public void clearChildren() { children.clear(); }
    public void createChildren(Template template) { ... }
}

@Entity
public class Child {
    @ManyToOne
    @JoinColumn(name = "parentId")
    private Parent parent;
}
The situation where the problem occurs is the following:
The user creates a new Parent instance, and creates new Child instances based on a template of their choosing by calling the createChildren() method. The template defines the amount and properties of the created children.
The user saves the parent, which cascades the persist to the children.
The user notices that the used template was wrong. He changes the template and saves, which results in deletion of the old children and the creation of new ones.
Commonly the deletion of the old children would be handled automatically by the orphanRemoval property, but the Child entity has a multi-column unique index, and some of the new children created based on the new template can have identical values in all columns of the index as some of the original children. When the changes are flushed to the database, JPA performs inserts and updates before deletions (or at least Hibernate does), and a constraint violation occurs. Oracle's deferred constraints would solve this, but we also support MS SQL, which AFAIK doesn't support deferred constraints (correct me if I'm wrong).
So in order to solve this, I manually delete the old children, flush the changes, create the new children, and save my changes. The artificial code snippet below shows the essential parts of what's happening now. Due to the way our framework works, the entities passed to this method are always in a detached state (which I'm afraid is a part of the problem).
public void createNewChildren(Parent parent, Template template) {
    for (Child child : parent.getChildren()) {
        // Have to run a find since the entities are detached
        entityManager.remove(entityManager.find(Child.class, child.getId()));
    }
    entityManager.flush();
    parent.clearChildren();
    parent.createChildren(template);
    entityManager.merge(parent); // EntityNotFoundException is thrown
}
The last line throws an exception as the EntityManager attempts to load the old children and merge them as well, but fails since they're already deleted. The question is, why does it try to load them in the first place? And more importantly, how can I prevent it? The only thing that comes to my mind that could cause this is a stale cache issue. I can't refresh the parent as it can contain other unsaved changes and those would be lost (plus it's detached). I tried setting the parent reference explicitly to null for each child before deleting them, and I tried to evict the old children from the 2nd level cache after deleting them. Neither helped. We haven't modified the JPA cache settings in any way.
We're using Hibernate 4.3.5.
UPDATE:
We are in fact clearing the children from the parent as well; this was maybe a bit ambiguous originally, so I updated the code snippets to make it clear.
Try removing the children from the parent before deleting them; that way the merge can't be cascaded to them, because they are no longer in the parent's collection.
for (Child child : new ArrayList<>(parent.getChildren())) { // iterate over a copy to avoid ConcurrentModificationException
    // Have to run a find since the entities are detached
    Child c = entityManager.find(Child.class, child.getId());
    parent.getChildren().remove(child); // ensure that the child is actually removed from the collection
    entityManager.remove(c);
}
UPDATE
I still think the order of operations is the cause of the problems here; try whether this works:
public void createNewChildren(Parent parent, Template template) {
    for (Child child : new ArrayList<>(parent.getChildren())) { // iterate over a copy
        // Have to run a find since the entities are detached
        Child c = entityManager.find(Child.class, child.getId());
        parent.getChildren().remove(child); // ensure that the child is actually removed from the collection
        c.setParent(null);
        entityManager.remove(c);
    }
    parent.createChildren(template);
    entityManager.merge(parent);
}
I have the following entities with a parent-child relationship:
@Entity
public class Parent {
    @Id @GeneratedValue String id;
    @Version Long version;

    @OneToMany(mappedBy = "parent", orphanRemoval = true)
    @Cascade({CascadeType.ALL})
    Set<Child> children;

    // getters and setters
}

@Entity
public class Child {
    @Id @GeneratedValue String id;

    @ManyToOne
    @JoinColumn(name = "parent_id")
    Parent parent;

    // getters and setters
}
I retrieve a Parent for editing on the web UI by copying its properties into a ParentDto, which has a list of ChildDtos.
Once I'm done editing, I send the ParentDto object back and copy all properties into a new Parent object (parent) with a new HashSet to store the Children created from the list of ChildDtos.
Then I call getCurrentSession().update(parent);
The problem
I can add children, update children, but I can't delete children. What is the issue here and how do I resolve it?
Thanks in advance.
You have a bidirectional association, so you need to remove the link to the parent from the Child side as well: set the Parent reference to null, and also set the Set<Child> to a new HashSet<Child> (or whatever your implementation is).
Then save the changes, and that will remove the children from the table.
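A minimal sketch of deleting the children through orphanRemoval on a Parent that is managed by the current session (the accessor names, and the use of a managed instance rather than the DTO copy, are assumptions on my part):
// Sketch only: orphanRemoval turns the removed children into DELETEs at flush.
Parent managedParent = (Parent) getCurrentSession().get(Parent.class, parentId); // parentId assumed
for (Child child : managedParent.getChildren()) {
    child.setParent(null);           // break the owning-side link
}
managedParent.getChildren().clear(); // orphanRemoval = true deletes these rows on flush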
This action can only be used in the context of an active transaction.
public void remove(Object entity);
Transitions managed instances to removed. The instances will be deleted from the datastore on the next flush or commit. Accessing a removed entity has undefined results.
For a given entity A, the remove method behaves as follows:
If A is a new entity, it is ignored. However, the remove operation cascades as defined below.
If A is an existing managed entity, it becomes removed.
If A is a removed entity, it is ignored.
If A is a detached entity, an IllegalArgumentException is thrown.
The remove operation recurses on all relation fields of A whose cascades include CascadeType.REMOVE. Read more about the entity lifecycle.
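A small sketch of those rules in practice; the entity and identifier names are placeholders:
// remove() must run inside an active transaction and expects a managed instance.
em.getTransaction().begin();
Child managed = em.find(Child.class, childId); // managed instance
em.remove(managed);                            // transitions to removed; the DELETE happens at flush/commit
em.getTransaction().commit();

// em.remove(detachedChild); // a detached instance would throw IllegalArgumentException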
Consider this scenario:
I have loaded a Parent entity through hibernate
Parent contains a collection of Children which is large and lazy loaded
The hibernate session is closed after this initial load while the user views the Parent data
The user may choose to view the contents of the lazy Children collection
I now wish to load that collection
What are the ways / best way of loading this collection?
Assume session-in-view is not an option, as the fetching of the Children collection would only happen after the user has viewed the Parent and decided to view the Children.
This is a service that will be accessed remotely by web and desktop-based clients.
Thanks.
The lazy collection can be loaded using Hibernate.initialize(parent.getCollection()), except that the parent object needs to be attached to an active session.
This solution takes the parent Entity and the name of the lazy-loaded field and returns the Entity with the collection fully loaded.
Unfortunately, as the parent needs to be reattached to the newly opened session, I can't use a reference to the lazy collection as this would reference the detached version of the Entity; hence the fieldName and the reflection. For the same reason, this has to return the attached parent Entity.
So in the OP scenario, this call can be made when the user chooses to view the lazy collection:
Parent parentWithChildren = dao.initialize(parent,"lazyCollectionName");
The Method:
public Entity initialize(Entity detachedParent, String fieldName) throws ReflectiveOperationException {
    // ...open a hibernate session...
    // reattach the parent to the session
    Entity reattachedParent = (Entity) session.merge(detachedParent);
    // get the field from the entity and initialize it
    // (looked up on the detached instance's class, for the reasons explained above)
    Field fieldToInitialize = detachedParent.getClass().getDeclaredField(fieldName);
    fieldToInitialize.setAccessible(true);
    Object objectToInitialize = fieldToInitialize.get(reattachedParent);
    Hibernate.initialize(objectToInitialize);
    return reattachedParent;
}
I'm making some assumptions about what the user is looking at, but it seems like you only want to retrieve the children if the user has already viewed the parent and really wants to see the children.
Why not try opening a new session and fetching the children by their parent? Something along the lines of ...
Criteria criteria = session.createCriteria(Child.class);
criteria.add(Restrictions.eq("parent", parent));
List<Child> children = criteria.list();
Hibernate handles collections in a different way than normal fields.
At my work we get around this by initialising, in the initial load, just the fields we need, on a case-by-case basis. For example, in a facade load method that is surrounded by a transaction you might have:
public Parent loadParentWithIntent1(Long parentId)
{
    Parent parent = loadParentFromDAO(parentId);
    for (Child c : parent.getChildren())
    {
        c.getField1();
    }
    return parent;
}
and we have a different facade call for each intent. This essentially achieves what you need, because you'd be loading these specific fields when you need them anyway, and this just puts them in the session at load time.