How to solve javax.persistence.EntityNotFoundException with JPA (without using @NotFound) - java

We are using JPA to load some stuff from a database. Some entities may have optional relationships between them, e.g.
@Entity
public class First {
    ....
    @OneToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH, CascadeType.DETACH})
    @JoinColumns(value = {
        @JoinColumn(name = "A_ID", referencedColumnName = "A_ID", insertable = false, updatable = false),
        @JoinColumn(name = "B_ID", referencedColumnName = "B_ID", insertable = false, updatable = false)})
    private Second second;
When this association is present in the database, everything works fine. When it's not, I get a javax.persistence.EntityNotFoundException.
What I want, instead of the exception, is to have this field set to NULL when the association is not present.
I have tried several different things, e.g. using optional = true in the relationship annotation (which, by the way, is the default), marking it as nullable, etc. Nothing does the trick; all these options seem to be ignored.
I found a lot of links mentioning this very same problem (including some questions here on Stack Overflow), but in all of them the suggestion is to use the @NotFound annotation from Hibernate. We do NOT want any dependency on Hibernate (we want to keep everything pure JPA).
Does any of you guys know any other way to solve this?
Many thanks for all your help!

Below is an alternative solution to this problem. I had to build on top of an old database where the relations were sometimes corrupt. This is how I solved it using only JPA.
@PostLoad
public void postLoad() {
    try {
        if (getObject() != null && getObject().getId() == 0) {
            setObject(null);
        }
    } catch (EntityNotFoundException e) {
        setObject(null);
    }
}

I've met the same problem. It's not always reproducible, so I cannot test it, but here are some thoughts:
The instance of your Second class is deleted, while the instance of the First class does not know anything about it.
You need a way to let an instance of First know when its instance of Second is deleted; the cascade option for removal does not help here.
You may try a bidirectional relationship, where the instance of First exists inside the instance of Second. That lets you update the instance of First via the instance of Second before removing the latter.
That said, bidirectional relationships are evil. I would suppose in your case that First is the owner of Second. Don't allow any service to delete your Second instance directly; let the service that works with instances of First remove the instance of Second. In that case you can first set the "second" field to null, then remove the instance of Second via the EntityManager.
You may also get the exception when executing queries while the second-level cache is enabled and the query has a hint that allows caching its result. I would suggest getting query results via the following method:
private List<?> getQueryResult(final Query query) {
    try {
        return query.getResultList();
    } catch (EntityNotFoundException e) {
        return query.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH)
                    .getResultList();
    }
}
If you work with entities via the EntityManager rather than via queries, and you get the exception because an entity is cached, you may invalidate all cached entities of First when you delete a Second.
I'd like to discuss this solution, as I cannot test it and cannot make sure it works. If somebody tries it, please let me know.
PS: if somebody has a unit test for Hibernate that reproduces this issue, could you please let me know? I wish to investigate it further.

It happens when you delete the associated entity's row. In my case I had a Product table depending on a Brand table. I deleted a row of Brand that one of the Product instances depended upon.

Try adding the parameter optional = true to your @OneToOne annotation.

What about adding a test method in the corresponding entity class:
public boolean getHasSecond() {
    return this.second != null;
}
Like this, you can check whether the relation exists...

Try using cascade = CascadeType.ALL in @OneToMany(...).


JPA lazy loading is not working in Spring Boot

I googled a lot and it is really bizarre that with Spring Boot (latest version) lazy loading may not be working. Below are pieces of my code:
My resource:
public ResponseEntity<Page<AirWaybill>> searchAirWaybill(CriteriaDto criteriaDto, @PageableDefault(size = 10) Pageable pageable) {
    Page<AirWaybill> result = airWaybillService.searchAirWaybill(criteriaDto, pageable);
    return ResponseEntity.ok().body(result);
}
My service:
@Service
@Transactional
public class AirWaybillService {
    // Methods
    public Page<AirWaybill> searchAirWaybill(AirWaybillCriteriaDto searchCriteria, Pageable pageable) {
        // Construct the specification
        return airWaybillRepository.findAll(spec, pageable);
    }
}
My Entity:
@Entity
@Table(name = "TRACKING_AIR_WAYBILL")
@JsonIdentityInfo(generator = ObjectIdGenerators.IntSequenceGenerator.class, property = "@airWaybillId") // to fix infinite recursion with LoadedAirWaybill class
public class AirWaybill {
    // Some attributes
    @NotNull
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "FK_TRACKING_CORPORATE_BRANCH_ID")
    private CorporateBranch corporateBranch;
}
And when debugging, I am still seeing all the lazily loaded attributes populated (a screenshot was attached in the original question).
One of my questions is could Jackson be involved in such behaviour?
Is there any way that I may have missed to activate the lazy loading?
EDIT
Another question, could the debugger be involved in ruining the lazy loading?
EDIT 2:
For specification build, I have :
public static Specification<AirWaybill> isBranchAirWayBill(long id) {
    return new Specification<AirWaybill>() {
        @Override
        public Predicate toPredicate(Root<AirWaybill> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
            return cb.equal(root.join("corporateBranch", JoinType.LEFT).get("id"), id);
        }
    };
}
A Hibernate session exists within a method annotated with @Transactional.
Passing an entity outside the service class is bad practice, because the session is closed after leaving your search method. On the other hand, your entity contains lazily initialised associations, which cannot be fetched once the session is closed.
Good practice is to map the entity onto a transport object and return those transport objects from the service (not raw entities).
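A minimal plain-Java sketch of that entity-to-DTO mapping (AirWaybillDto and the branchName field are hypothetical, since the question does not show the entity's attributes; the point is that the copy happens inside the service, while the transaction is still open):

```java
import java.util.List;
import java.util.stream.Collectors;

class AirWaybill {                       // stand-in for the JPA entity
    Long id;
    String branchName;                   // imagine this came from a lazy association
    AirWaybill(Long id, String branchName) { this.id = id; this.branchName = branchName; }
}

class AirWaybillDto {
    final Long id;
    final String branchName;
    AirWaybillDto(Long id, String branchName) { this.id = id; this.branchName = branchName; }

    // Copy the fields the client needs while the session is still open,
    // so Jackson never touches a lazy proxy after the transaction ends.
    static AirWaybillDto from(AirWaybill entity) {
        return new AirWaybillDto(entity.id, entity.branchName);
    }

    static List<AirWaybillDto> from(List<AirWaybill> entities) {
        return entities.stream().map(AirWaybillDto::from).collect(Collectors.toList());
    }
}
```

The controller then returns `AirWaybillDto` objects, and the entity never leaves the service layer.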
Spring Boot by default enables:
spring.jpa.open-in-view = true
That means the persistence context stays open for the whole web request, so lazy loading still works during view rendering. Try disabling it.
More information here.
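If you decide to turn it off, the switch (assuming a standard Spring Boot application.properties; a YAML configuration would use the equivalent key) looks like this:

```properties
# Disable Open Session in View so lazy loading outside a transaction
# fails fast (LazyInitializationException) instead of silently working
spring.jpa.open-in-view=false
```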
Most likely you are debugging while still inside the service, i.e. while the transaction is still active and lazy loading can be triggered (any method called on a lazy element triggers the fetch from the database).
The problem is that lazy loading cannot occur outside of a transaction, and Jackson is serialising your entity definitely outside the boundaries of one.
You should either fetch all the required dependencies when building your specification, or try @Transactional at the resource level (but only as a last resort).
Just so you know, the LAZY fetching strategy is only a hint, not a mandatory action; EAGER is mandatory:
The LAZY strategy is a hint to the persistence provider runtime that data should be fetched lazily when it is first accessed. The implementation is permitted to eagerly fetch data for which the LAZY strategy hint has been specified.
When using a debugger, you are trying to access the value of your variables. So, at the moment you click that little arrow on your screen, the value of the variable in question is (lazily) loaded.
I suppose you are using Hibernate as your JPA provider.
From specification:
The EAGER strategy is a requirement on the persistence provider runtime that data must be eagerly fetched. The LAZY strategy is a hint to the persistence provider runtime that data should be fetched lazily when it is first accessed. The implementation is permitted to eagerly fetch data for which the LAZY strategy hint has been specified. https://docs.jboss.org/hibernate/jpa/2.2/api/javax/persistence/FetchType.html
Hibernate ignores the fetch type, especially in OneToOne and ManyToOne relationships from the non-owning side.
There are a few options for forcing Hibernate to use fetch type LAZY if you really need it.
The simplest one is to fake a one-to-many relationship. This works because lazy loading of a collection is much easier than lazy loading of a single nullable property, but generally this solution is very inconvenient if you use complex JPQL/HQL queries.
The second is to use build-time bytecode instrumentation. For more details please read the Hibernate documentation: 19.1.7. Using lazy property fetching. Remember that in this case you have to add the @LazyToOne(LazyToOneOption.NO_PROXY) annotation to the one-to-one relationship to make it lazy; setting fetch to LAZY is not enough.
The last solution is to use runtime bytecode instrumentation, but it works only for those who use Hibernate as the JPA provider in a full-blown JEE environment (in that case setting "hibernate.ejb.use_class_enhancer" to true should do the trick: Entity Manager Configuration) or use Hibernate with Spring configured to do runtime weaving (this might be hard to achieve on some older application servers). In this case the @LazyToOne(LazyToOneOption.NO_PROXY) annotation is also required.
For more informations look at this:
http://justonjava.blogspot.com/2010/09/lazy-one-to-one-and-one-to-many.html
Just a guess: you are forcing a fetch while building your specification.
I expect something like
static Specification<AirWaybill> buildSpec() {
    return (root, query, criteriaBuilder) -> {
        Join<AirWaybill, CorporateBranch> br = (Join) root.fetch("corporateBranch");
        return criteriaBuilder.equal(br.get("addressType"), 1);
    };
}
If this is the case, try changing root.fetch to root.join.
The retrieved data is already lazy, but you are using debug mode: the debugger fetches the value the moment you click to watch it.
You can solve this problem in two steps with jackson-datatype-hibernate:
Kotlin example.
Add in build.gradle.kts:
implementation("com.fasterxml.jackson.datatype:jackson-datatype-hibernate5:$jacksonHibernate")
Create a @Bean:
@Bean
fun hibernate5Module(): Module = Hibernate5Module()
Notice that Module is com.fasterxml.jackson.databind.Module, not java.util.Module
Another consideration: when using Lombok, the @Data/@Getter annotations can cause lazy items to be loaded unnecessarily. So be careful when using Lombok.
This was my case.
I think I might have a solution. You can give this a try; it worked for me after 4 hours of trial and error -
User Entity :
class User {
    @Id
    String id;
    @JsonManagedReference
    @OneToMany(mappedBy = "user", fetch = FetchType.LAZY)
    private List<Address> addressDetailVOList = new ArrayList<Address>();
}
Address entity:
class Address {
    @JsonBackReference
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "userId")
    private User user;
}
Your parent class uses @JsonManagedReference and the child class uses @JsonBackReference. With this, you can avoid the infinite loop of entity objects in the response and the resulting stack overflow error.
I also faced the same issue with Spring Data JPA. I added the annotations below and was able to get the customer records for a given order ID.
Customer to Order: one-to-many.
Order to Customer is lazily loaded.
Order.java
@ManyToOne(cascade = CascadeType.ALL, targetEntity = CustomerEntity.class, fetch = FetchType.LAZY)
@Fetch(FetchMode.JOIN)
@JoinColumn(name = "CUSTOMER_ID", referencedColumnName = "CUSTOMER_ID", insertable = false, updatable = false)
@LazyToOne(LazyToOneOption.PROXY)
private CustomerEntity customer;
Customer.java
@Entity
@Table(name = "CUSTOMER",
       uniqueConstraints = @UniqueConstraint(columnNames = {"mobile"}))
public class CustomerEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "customer_id")
    private Integer customerId;
    private String name;
    private String address;
    private String city;
    private String state;
    private Integer zipCode;
    private Integer mobileNumber;
    @OneToMany(mappedBy = "customer")
    @Fetch(FetchMode.JOIN)
    @LazyToOne(LazyToOneOption.PROXY)
    private List<OrderEntity> orders;
}

hibernate - select/load only a field

I want to add a field in a Hibernate table-mapped/entity class.
I want this field to not be mapped to an actual table column, and I want Hibernate not to try to insert/update it to the DB.
But I want to be able to load this field via a custom select in the DAO e.g. via
query.addEntity(getPersistentClass().getName());
The closest I got to this was by making the field @Transient, but then even the select does not load its value, so this is not quite what I need.
Is this possible at all and if so how?
Well, if I understand what you are trying to do, then I think the solution is this:
@Column(name = "{name of column}", updatable = false)
This way Hibernate will not try to update this column once the object is created.
Your getter must be a bit smarter.
For example, you can use the HibernateCallback interface from Spring, like this:
public String getName(Session session) {
    return new HibernateCallback<String>() {
        @Override
        public String doInHibernate(Session session) throws HibernateException {
            return (String) session.createSQLQuery("SELECT NAME FROM MY_TABLE WHERE SOME_CONDITIONS").uniqueResult();
        }
    }.doInHibernate(session);
}
A better way would be to create a kind of execute method in another class where you have access to the session.
With that solution you can still mark your field as @Transient.
You can use
@Column(name = "{name of column}", insertable = false, updatable = false)
Do not mark the field as @Transient.
This way the property will not be inserted or updated, but it can still be used in selects.

Protect entity from cascade delete in Hibernate

Simple question: does anyone have any ideas how to protect an entity from being deleted via CascadeType.ALL in Hibernate at runtime (maybe by throwing a runtime exception)?
Say, we have some entity:
@Entity
@Table(name = "FOO_ENTITY")
public class FooEntity {
    ...
}
And I want to protect it from accidental wrong mapping, like:
@Entity
@Table(name = "SOME_OTHER_FOO_ENTITY")
public class SomeOtherFooEntity {
    ...
    @ManyToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "FOO_ENTITY_ID")
    private FooEntity fooEntity;
}
So it should be possible to delete an entity of type FooEntity via session.delete(fooEntityObj), but deleting it via cascade removal (session.delete(someOtherFooEntityObj)) must be disabled.
ATTENTION: for those who read my question inattentively or think that I do not understand what I am asking:
1) I cannot remove CascadeType.ALL. The question is: how do I programmatically avoid and protect against this?
2) Unit tests are not the way; I'm looking for a runtime solution.
One of the ways this can be done is to programmatically inspect Hibernate mapping meta-data and to check whether any delete operation (ALL, REMOVE, orphanRemoval) cascades to the protected entity from any other entity; something like:
String protectedEntityName = FooEntity.class.getName();
SessionFactoryImpl sessionFactory = (SessionFactoryImpl) session.getSessionFactory();
for (EntityPersister entityPersister : sessionFactory.getEntityPersisters().values()) {
    for (int i = 0; i < entityPersister.getPropertyTypes().length; i++) {
        Type type = entityPersister.getPropertyTypes()[i];
        EntityType entityType = null;
        if (type.isCollectionType()) {
            CollectionType collectionType = (CollectionType) type;
            Type elementType = sessionFactory.getCollectionPersister(collectionType.getRole()).getElementType();
            if (elementType.isEntityType()) {
                entityType = (EntityType) elementType;
            }
        } else if (type.isEntityType()) {
            entityType = (EntityType) type;
        }
        if (entityType != null && entityType.getName().equals(protectedEntityName)) {
            if (entityPersister.getPropertyCascadeStyles()[i].doCascade(CascadingAction.DELETE)) {
                // Exception can be thrown from here.
                System.out.println("Found! Class: " + entityPersister.getEntityName() + "; property: " + entityPersister.getPropertyNames()[i]);
            }
        }
    }
}
This validation can be performed on server startup or in an integration test.
The advantage of this approach is that you don't have to modify the defined behavior of Hibernate; it just acts as a reminder that you forgot not to cascade deletion to the FooEntity.
Regarding the tests: yes, I know the OP explicitly said that tests are not an acceptable solution for this use case (and I personally agree with that in general). But these kinds of automatic checks may be useful because you write them once and forget about them; you don't have to update the tests whenever you add a new mapping or modify an existing one (which would defeat the purpose of the tests, because you might forget or overlook adapting them for each possible use case).
For starters I think you do understand what you're asking, you've just settled on a specific solution that many people, myself included, are questioning. It's not inattentiveness...it's trying to solve your actual problem.
If you really want to stop the CascadeType.ALL value on annotations from having its documented effect, instead of verifying that CascadeType.ALL is not used where it shouldn't be (and validating those expectations via unit tests), then extend DefaultDeleteEventListener and override the deleteEntity method to always pass false to the super implementation for the isCascadeDeleteEnabled flag.
If you want a solution with some semblance of standard, expected behavior, then define the relationships that should cascade deletes at the schema level, and establish best practices to only use the CascadeTypes you care about in your code. Maybe that's PERSIST and MERGE; maybe you're using the save and update functionality of the session factory and so need the Hibernate-specific @Cascade annotation.
Can't you just remove the cascade attribute from the @ManyToOne annotation if you don't want changes cascaded to the objects associated with the one that actually changed?
The most reliable way to catch any kind of programming error is to write unit tests.
If you practice Test Driven Development you will minimise the chances of "forgetting" to do it.

Orphan deletion in Hibernate (when have multiple mapped objects)

I've got this structure of project:
class UserServiceSettingsImpl {
    ...
    @ManyToOne
    private UserImpl user;
    @ManyToOne
    private ServiceImpl service;
    ...
}
class ServiceImpl {
    ....
    @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "service", orphanRemoval = true)
    private Set<UserServiceSettingsImpl> userServiceSettings;
    ....
}
class UserImpl {
    ....
    @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "user", orphanRemoval = true)
    private Set<UserServiceSettingsImpl> serviceSettings;
    ....
}
I am trying to delete a Service and everything that belongs to it (the UserServiceSettingsImpl entries), but these settings are not being removed (I suppose because they are not orphans, since the UserImpl still references them). So the question is: is there a way to delete the settings without removing them from each user manually (there could be a lot of users with a lot of settings, and iterating over them could take a lot of time)?
You are correct about why the UserServiceSettings are not deleted when deleting a Service while they are also referenced by a User. They are not orphans and will have to be deleted explicitly per your business logic.
Three ideas:
Use the ORM to batch delete entities.
It's not much different than iterating, but might be optimized while still using the ORM.
List<UserServiceSettingsImpl> settingsCopy = new ArrayList<>(service.getSettings());
service.getSettings().clear();
myDao.deleteAll(settingsCopy);
Use direct HQL/SQL to batch delete.
This depends on what framework you are using, but generally it would be something like this, probably in your repository/DAO class:
delete from UserServiceSettingsImpl o where o.service.id = ?
However, Hibernate does not support JOINs when deleting, afaik, so this doesn't work as written. It's generally necessary to rework the HQL into a "delete where id IN (...)" format.
Set up CASCADE DELETE and CASCADE UPDATE in your database DDL, outside of the ORM framework. (Not recommended.)
However, the last two options have problems if there is a chance that a service's and a user's UserServiceSettings can be modified at the same time by multiple threads (even with correct transaction boundaries), or if those entities will be used within the ORM context after the delete without a reload. In that case you will likely run into unexpected and sporadic errors with the last two approaches, and should instead iterate the settings and delete via the ORM, even if it is inefficient.
Even with the first approach, it can be tricky to avoid errors in highly concurrent environments when deleting shared entities.
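As a plain-Java illustration of the "delete where id IN (...)" rework from the second idea (BulkDeleteSketch and buildDeleteJpql are hypothetical names; in real code you would bind the ids as a query parameter, e.g. `where o.id in :ids`, rather than concatenate them into the string):

```java
import java.util.List;
import java.util.stream.Collectors;

class BulkDeleteSketch {
    // Collect the child ids first, then issue ONE bulk delete statement,
    // instead of deleting the settings row by row.
    static String buildDeleteJpql(List<Long> ids) {
        String idList = ids.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(", "));
        return "delete from UserServiceSettingsImpl o where o.id in (" + idList + ")";
    }
}
```

The resulting string would then be passed to something like `entityManager.createQuery(jpql).executeUpdate()`.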
You're correct that you cannot delete them in any kind of automatic way - they will never be orphans. I think the best you can do is just write yourself a helper method. e.g. if you have a ServiceDao class, you would just add a helper as:
public void deleteServiceAndSettings(Service service) {
    for (UserServiceSettings setting : service.getUserServiceSettings()) {
        session.delete(setting);
    }
    session.delete(service);
}

Bulk Insert via Spring/Hibernate where ids are needed

I have to do bulk inserts, and need the ids of what's being added. This is a basic example that shows what I am doing (which is obviously horrible for performance). I am looking for a much better way to do this.
public void omgThisIsSlow(final Set<ObjectOne> objOneSet,
                          final Set<ObjectTwo> objTwoSet) {
    for (final ObjectOne objOne : objOneSet) {
        persist(objOne);
        for (final ObjThree objThree : objOne.getObjThreeSet()) {
            objThree.setObjOne(objOne);
            persist(objThree);
        }
        for (final ObjectTwo objTwo : objTwoSet) {
            final ObjectTwo objTwoCopy = new ObjectTwo();
            objTwoCopy.setFoo(objTwo.getFoo());
            objTwoCopy.setBar(objTwo.getBar());
            persist(objTwoCopy);
            final ObjectFour objFour = new ObjectFour();
            objFour.setObjOne(objOne);
            objFour.setObjTwo(objTwoCopy);
            persist(objFour);
        }
    }
}
In the case above persist is a method which internally calls
sessionFactory.getCurrentSession().saveOrUpdate();
Is there any optimized way of getting back the ids and bulk inserting based upon that?
Thanks!
Update: got it working with the following additions and help from JustinKSU.
import javax.persistence.*;

@Entity
public class ObjectFour {
    @ManyToOne(cascade = CascadeType.ALL)
    private ObjectOne objOne;
    @ManyToOne(cascade = CascadeType.ALL)
    private ObjectTwo objTwo;
}
// And similar for the other classes whose objects need to be persisted
If you define the relationships using annotations with appropriate cascading, you should be able to set the object relationships in Java and persist it all in one call. Hibernate will handle setting the foreign keys for you.
Documentation -
http://docs.jboss.org/hibernate/annotations/3.5/reference/en/html/entity.html#entity-mapping-association
An example annotation on a parent object would be
@OneToMany(mappedBy = "foo", fetch = FetchType.LAZY, cascade = CascadeType.ALL)
On the child object you would do the following
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "COLUMN_NAME", nullable = false)
I'm not sure, but I think Hibernate can do bulk inserts/updates. The problem, as I understand it, is that you need to persist the parent object in order to assign the reference to the child object.
I would try persisting all the "one" objects first, then iterating over all their "three" objects and persisting them in a second bulk insertion.
If your tree has three levels, you can achieve all the insertions in 3 batches. Pretty decent, I think.
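The level-by-level idea above can be sketched in plain Java. Here persist is a stub that just assigns sequential ids, standing in for the ORM call (the real code would use sessionFactory.getCurrentSession().saveOrUpdate(...)); Node, BatchPersister, and the field names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

class Node {
    Long id;                                   // assigned on "persist"
    Long parentId;                             // resolved from an already-persisted parent
    final List<Node> children = new ArrayList<>();
}

class BatchPersister {
    private long nextId = 1;

    // One "bulk insert" per tree level: every id in this batch only
    // depends on ids assigned in earlier batches.
    void persistBatch(List<Node> batch) {
        for (Node n : batch) {
            n.id = nextId++;
        }
    }

    // Walk the tree breadth-first so each level becomes one batch.
    void persistTree(Node root) {
        List<Node> level = List.of(root);
        while (!level.isEmpty()) {
            persistBatch(level);
            List<Node> next = new ArrayList<>();
            for (Node n : level) {
                for (Node child : n.children) {
                    child.parentId = n.id;     // parent id is known by now
                    next.add(child);
                }
            }
            level = next;
        }
    }
}
```

A tree of depth three is persisted in exactly three batches, which is the "3 batches" claim from the answer.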
Assuming you just want to persist a large amount of data in one go, and your problem is that you don't know what the IDs will be as the various related objects are persisted, one possible solution is to run all your inserts (as bulk inserts) into ancillary tables (one per real table) with temporary IDs (and a session ID), and have a stored procedure perform the inserts into the real tables while resolving the IDs.
