cascade = CascadeType.ALL: what to expect? (Java)

I'm wondering what to expect when I use cascade = CascadeType.ALL, as in the following mapping:
@OneToMany(
    mappedBy = "employeeProfile",
    cascade = CascadeType.ALL,
    orphanRemoval = true)
private List<ProfileEffortAllocation> effortAllocations;

public List<ProfileEffortAllocation> getEffortAllocations() {
    if (effortAllocations == null) {
        effortAllocations = new ArrayList<>();
    }
    return effortAllocations;
}

public void setEffortAllocations(List<ProfileEffortAllocation> effortAllocations) {
    this.effortAllocations = effortAllocations;
}
I'm finding that when I add a new effortAllocation and attempt to save the object, but a validation failure prevents my code from ever reaching session.saveOrUpdate(parentObj), I still get a pk on the child rather than null, as if persist were being called on the @OneToMany children. Shouldn't my code have to call session.saveOrUpdate(parentObj) before I ever see a pk on an effortAllocation?
I'd like to point out that the parent object already exists and was loaded from the database with a pk before the new child record was added.

When you use CascadeType.ALL, any operation you perform on the parent is also cascaded to its children.
Yes, you should call saveOrUpdate(parent).
In your case the parent objects already exist. You can load the existing parent, create a new child, attach the child to the parent, and when you call saveOrUpdate(parent) it should update the parent, create the new children, and relate them to that parent.
Yes, it is generating an id for the child, because it is trying to create the child due to the cascade, and you have presumably configured id generation on the @Id field.
Enable SQL logging using hibernate.show_sql to understand better what's happening.
I assume you have a @JoinColumn in your child entity that maps to the parent's primary key.
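To turn on the SQL logging mentioned above, a hibernate.properties fragment like the following should do (the same keys can go in hibernate.cfg.xml or persistence.xml); format_sql is optional but makes the output readable:

```properties
# Log every SQL statement Hibernate issues, so you can see the cascaded insert
hibernate.show_sql=true
# Pretty-print the logged SQL (optional)
hibernate.format_sql=true
```

With this enabled you can watch for an INSERT on the child table firing before your saveOrUpdate(parentObj) line is ever reached.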

The cause of this issue was a lookup query triggering a flush before returning its results. The solution was to set this.session.setFlushMode(FlushMode.COMMIT);
Hibernate tries to ensure that the database contents are up to date before running any queries.
https://forum.hibernate.org/viewtopic.php?p=2316849

Related

How to prevent hibernate from creating a proxy

I have a tricky problem: Hibernate uses more queries than necessary for a simple findAll call. In my model there are two entities, Parent and Child, with a @OneToMany association.
Parent
class Parent {
    @Id
    private long id;

    // unique
    private String code;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    private List<Child> children;
}
Child
class Child {
    @Id
    private long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_code", referencedColumnName = "code")
    @LazyToOne(LazyToOneOption.NO_PROXY) // here I'm trying to tell Hibernate to create no proxy and just ignore the field, but no luck :/
    public Parent parent;
}
The problem is that whenever I try to fetch the list of children using childRepository.findAll(), Hibernate makes N+1 select queries. Why?
I think this may be the explanation: IMHO, when Hibernate populates the child object, it tries to create a proxy for the parent field,
and for that it needs the id of the parent row. Normally that would be the foreign key in the child table, but in my case the FK isn't bound to the primary key of the Parent table but to a unique column (please don't ask me why), so in order to populate the id it needs an additional select query just to initialize the proxy for the parent field.
So my question is how to prevent Hibernate from creating a proxy for the parent field.
Thanks.
You are right. The proxy needs the @Id of the proxied entity (that is how it can be found later). As soon as you define LazyToOneOption.NO_PROXY, you tell the system to give back the real object, and that is what happens here: what gets mapped onto the result is not a proxy, because with this annotation you explicitly disabled it, so you have to get the real object.
Based on the mapping provided you cannot ignore the field, because you would lose the information about which Parent the Child belongs to. So with this kind of setup you will always need to read the parent.
If this field is not needed at all in a specific area, you can create other mappings to the same table. But be careful! This can introduce a load of other cache-related problems.

Avoid Instantiation of IndirectList in Eclipselink

I have a simple OneToMany Relation between a Parent and a Child.
Parent:
@OneToMany(mappedBy = "parent", orphanRemoval = true, cascade = CascadeType.ALL)
private List<Child> children = new ArrayList<>();
Child:
@ManyToOne(optional = false)
@JoinColumn(name = "PARENT_ID", nullable = false)
private Parent parent;
Because a parent can have a large number of children, I wanted to take advantage of lazy instantiation of indirect collections:
IndirectList and IndirectSet can be configured not to instantiate the list from the database when you add and remove from them. IndirectList defaults to this behavior. When Set to true, the collection associated with this TransparentIndirection will be setup so as not to instantiate for adds and removes. The weakness of this setting for an IndirectSet is that when the set is not instantiated, if a duplicate element is added, it will not be detected until commit time.
As the default FetchType of OneToMany is LAZY and I am using a List for my Collection, loading a parent from the database causes an IndirectList to be used for the relation. As soon as I add another child to that parent I can see that a select query for the children of that parent is executed.
How can I change that?
I am using Eclipselink 2.6.4 (org.eclipse.persistence:eclipselink:2.6.4).
I also tried to use a DescriptorCustomizer to call org.eclipse.persistence.mappings.CollectionMapping.setUseLazyInstantiationForIndirectCollection(Boolean) on my relation, but this seemed to have absolutely no effect.
After debugging into the Method org.eclipse.persistence.indirection.IndirectList.add(E), I was able to see that the Method call to org.eclipse.persistence.indirection.IndirectList.shouldAvoidInstantiation() at line 206 returned false, because org.eclipse.persistence.indirection.IndirectList._persistence_getPropertyChangeListener() at line 1007 returns null and null is not instanceof AttributeChangeListener. Because of this the relation is then instantiated by org.eclipse.persistence.indirection.IndirectList.getDelegate() in line 216.
To me this seems like a bug, but I don't know enough about this implementation to be sure.
Change tracking is required to support not instantiating lazy collections when making modifications. Change tracking is enabled when using weaving as described here: https://www.eclipse.org/eclipselink/documentation/2.5/concepts/app_dev007.htm
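For reference, weaving and change tracking are typically switched on via persistence-unit properties; a persistence.xml sketch (the unit name "my-unit" is hypothetical, and static weaving additionally requires running the EclipseLink static weaver as a build step):

```xml
<!-- Sketch: enable weaving so IndirectList gets an AttributeChangeListener
     and can avoid instantiating the collection on add/remove -->
<persistence-unit name="my-unit">
  <properties>
    <property name="eclipselink.weaving" value="static"/>
    <property name="eclipselink.weaving.changetracking" value="true"/>
  </properties>
</persistence-unit>
```

In a Java EE container or with a javaagent, dynamic weaving (value "true") can be used instead of "static".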

Hibernate: mapping parent to child entity with 2 references to one column in child?

I am trying to get cascade delete working within my entities, but I think it is being blocked by the fact that I have two references to one column in my child entity.
In my child Dog entity I originally had the following field:
@Column(name = "KENNEL_ID", insertable = false, updatable = false)
private String kennelId;
I then added this because I wanted to get a list of all child entities related to the parent:
@ManyToOne
@JoinColumn(name = "KENNEL_ID")
private Kennel kennel;
In my parent Kennel entity I also added this to refer to the field in the child I added:
@OneToMany(mappedBy = "kennel", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.LAZY)
private List<Dog> dogList = new ArrayList<Dog>();
Before I added the second child reference and the parent reference, cascade delete worked for all of my entities. Since adding them, it does not.
How can I fix this?
It is not a problem of mapping parent and child to the same column. The problem is that you need to maintain both ends of the bidirectional relationship by hand:
child.setParent(parent);
parent.addChild(child);
BTW: setting it only on one side (the one responsible for storing the relationship in the database), then storing and reloading the entity, will work in some cases too (you will find this dirty trick in many old tutorials), but in my opinion it is bad practice. (In your test case, it would require clearing the cache before you reload the parent after the child is saved.)
public void setDogList(List<Dog> dogList) {
    this.dogList.clear();
    this.dogList.addAll(dogList);
}
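The "maintain both ends" rule can be sketched in plain Java (hypothetical Kennel/Dog classes with the JPA annotations omitted); an addDog helper keeps both sides consistent in one call:

```java
import java.util.ArrayList;
import java.util.List;

class Kennel {
    private final List<Dog> dogList = new ArrayList<>();

    // Helper that maintains both ends of the bidirectional relationship
    public void addDog(Dog dog) {
        dogList.add(dog);
        dog.setKennel(this);
    }

    public List<Dog> getDogList() {
        return dogList;
    }
}

class Dog {
    private Kennel kennel;

    public void setKennel(Kennel kennel) {
        this.kennel = kennel;
    }

    public Kennel getKennel() {
        return kennel;
    }
}
```

With a helper like this, cascading and orphanRemoval always see a consistent object graph, regardless of which side the caller started from.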

JPA EntityManager.merge() attempts to cascade the update to deleted entities

I'm facing a problem with EntityManager.merge() where the merge is cascaded to other entities that have already been deleted from the database. Say I have the following entities:
@Entity
public class Parent {
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, mappedBy = "parent")
    private List<Child> children;

    public void clearChildren() { children.clear(); }

    public void createChildren(Template template) { ... }
}

@Entity
public class Child {
    @ManyToOne
    @JoinColumn(name = "parentId")
    private Parent parent;
}
The situation where the problem occurs is the following:
The user creates a new Parent instance, and creates new Child instances based on a template of their choosing by calling the createChildren() method. The template defines the amount and properties of the created children.
The user saves the parent, which cascades the persist to the children.
The user notices that the used template was wrong. He changes the template and saves, which results in deletion of the old children and the creation of new ones.
Commonly the deletion of the old children would be handled automatically by the orphanRemoval property, but the Child entity has a multi-column unique index, and some of the new children created based on the new template can have identical values in all columns of the index as some of the original children. When the changes are flushed to the database, JPA performs inserts and updates before deletions (or at least Hibernate does), and a constraint violation occurs. Oracle's deferred constraints would solve this, but we also support MS SQL, which AFAIK doesn't support deferred constraints (correct me if I'm wrong).
So in order to solve this, I manually delete the old children, flush the changes, create the new children, and save my changes. The artificial code snippet below shows the essential parts of what's happening now. Due to the way our framework works, the entities passed to this method are always in a detached state (which I'm afraid is a part of the problem).
public void createNewChildren(Parent parent, Template template) {
    for (Child child : parent.getChildren()) {
        // Have to run a find since the entities are detached
        entityManager.remove(entityManager.find(Child.class, child.getId()));
    }
    entityManager.flush();
    parent.clearChildren();
    parent.createChildren(template);
    entityManager.merge(parent); // EntityNotFoundException is thrown
}
The last line throws an exception as the EntityManager attempts to load the old children and merge them as well, but fails since they're already deleted. The question is, why does it try to load them in the first place? And more importantly, how can I prevent it? The only thing that comes to my mind that could cause this is a stale cache issue. I can't refresh the parent as it can contain other unsaved changes and those would be lost (plus it's detached). I tried setting the parent reference explicitly to null for each child before deleting them, and I tried to evict the old children from the 2nd level cache after deleting them. Neither helped. We haven't modified the JPA cache settings in any way.
We're using Hibernate 4.3.5.
UPDATE:
We are in fact clearing the children from the parent as well, this was maybe a bit ambiguous originally so I updated the code snippets to make it clear.
Try removing the children from the parent before deleting them; that way MERGE can't be cascaded to them because they are no longer in the parent's collection.
// Iterate over a copy to avoid ConcurrentModificationException
for (Child child : new ArrayList<>(parent.getChildren())) {
    // Have to run a find since the entities are detached
    Child c = entityManager.find(Child.class, child.getId());
    parent.getChildren().remove(c); // ensure that the child is actually removed
    entityManager.remove(c);
}
UPDATE
I still think the order of operations is the cause of the problem here; try whether this works:
public void createNewChildren(Parent parent, Template template) {
    // Iterate over a copy to avoid ConcurrentModificationException
    for (Child child : new ArrayList<>(parent.getChildren())) {
        // Have to run a find since the entities are detached
        Child c = entityManager.find(Child.class, child.getId());
        parent.getChildren().remove(c); // ensure that the child is actually removed
        c.setParent(null);
        entityManager.remove(c);
    }
    parent.createChildren(template);
    entityManager.merge(parent);
}

How to ignore a unique violation when inserting a list of objects, each containing a set of objects

I use PostgreSQL and Spring Data JPA with Hibernate. I have a @OneToMany relation with orphanRemoval = false because I very often add many children to the relation.
Parent:
@OneToMany(mappedBy = "parent", cascade = { CascadeType.ALL }, orphanRemoval = false, fetch = FetchType.LAZY)
public Set<Child> getChildren() {
    return children;
}
Child:
@ManyToOne
@JoinColumn(name = "parent_id")
public Parent getParent() {
    return parent;
}
To persist or merge objects I use the method
<S extends T> Iterable<S> save(Iterable<S> entities)
from CrudRepository. I save a list of parents, where every parent contains a set of children. The child table has a unique constraint. If a constraint violation occurs, I want to ignore it and omit (not persist) the child that causes the violation, but still insert every child that doesn't cause a constraint violation. How can I do that?
The dirty way is to handle this with exceptions:
1. Try to update the database; if it succeeds, stop here.
2. Catch the unique-violation exception and find the underlying JDBCException. Cast to your database-specific exception and identify the offending children.
3. Remove those children from the parent.
4. Go to 1.
The clean way is to filter out the entities that would produce unique-violation exceptions.
After you have filtered those entities, you can save the good ones.
Exceptions should be used for what they really are: exceptions.
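A plain-Java sketch of that filtering idea, assuming (hypothetically) that the unique constraint is on a single String field and that the keys already present in the database have been fetched up front:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class ChildFilter {
    // Keep only the children whose unique key is not already taken,
    // de-duplicating within the batch itself as well
    static List<String> filterInsertable(List<String> candidateKeys, Set<String> existingKeys) {
        Set<String> seen = new HashSet<>(existingKeys);
        List<String> insertable = new ArrayList<>();
        for (String key : candidateKeys) {
            if (seen.add(key)) { // add() returns false for keys already seen
                insertable.add(key);
            }
        }
        return insertable;
    }
}
```

For example, filtering the batch ["a", "b", "a", "c"] against the existing keys {"b"} keeps only "a" and "c"; the surviving children can then be saved in one batch without tripping the constraint.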
This is a Postgres-specific answer and not very kosher, but you could employ a similar approach with other DBs:
take advantage of the ON CONFLICT clause in a native Postgres INSERT statement. That way you won't have to handle the unique-constraint exceptions at all; the DB will resolve the conflicts for you as it encounters them.
Prepare a native SQL insert statement, e.g.
insert into child (id, parent_id, data) values (:id, :parent_id, :data) on conflict do nothing
Use the previously written statement with javax.persistence.EntityManager#createNativeQuery(java.lang.String):
entityManager.createNativeQuery(sql)
    .setParameter("id", it.id)
    .setParameter("parent_id", parentId)
    .setParameter("data", it.data)
    .executeUpdate();
Repeat for all the children.
