I am working on a huge application with a complex database schema. I am using Spring and Hibernate for the development.
I want to soft-delete an entity (the active flag lives in a superclass rather than being repeated in every entity). I implemented the suggestion provided here.
Below is the structure of my entities and hibernate util classes
Base Entity
@MappedSuperclass
public abstract class BaseEntity<TId extends Serializable> implements IEntity<TId> {

    @Basic
    @Column(name = "IsActive")
    protected boolean isActive;

    public Boolean getIsActive() {
        return isActive;
    }

    public void setIsActive(Boolean isActive) {
        this.isActive = isActive;
    }
}
Child Entity:
@Entity(name = "Role")
@Table(schema = "dbo")
public class Role extends BaseEntity {
    //remaining fields
    //1. foreign key reference to another entity
    //2. List<Child> entities
    //3. Self reference fields
}
Hibernate Util Class:
public void remove(TEntity entity) {
    //Note: enterprise data should never be removed.
    entity.setIsActive(false);
    sessionFactory.getCurrentSession().update(entity);
}
Now I have a few requirements related to this which I am not able to solve:
When I delete a 'Role' entity, all its child entities should also get deleted (soft delete only, for all of them). Do I need to fetch the parent entity, iterate through the children and delete them one by one?
Role has a foreign-key reference to another entity, 'Department'. If a department is deleted, the associated roles should get deleted conditionally (i.e., only if the caller decides to: in some cases we don't want to delete the referred entities).
There are some self-referencing columns like 'ParentRoleId'. If a Role is deleted, all the roles that reference it should also be deleted. Do I need to fetch the ID, load all the self-referencing child entities and then delete each one?
E.g., a Department can have a parent department (via the field 'parentdeptid'). If I delete a parent department, all its sub-departments should get deleted.
If anyone has any suggestions on how to do this, please let me know.
I am building a blog system and would like to provide an upvote/downvote feature for blogs. Since the vote count of a blog should be persisted, I chose MySQL as the data store, and I use Spring JPA (Hibernate) for the ORM. Here are my data objects:
class Blog {
    // ...
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @OneToOne(optional = false, fetch = FetchType.EAGER)
    @PrimaryKeyJoinColumn
    private BlogVoteCounter voteCounter;
}
And the counter class:
@Entity
public class BlogVoteCounter extends ManuallyAssignIdEntitySuperClass<Long> {
    @Id
    private Long id;
    private Integer value;
}
The reason I separated BlogVoteCounter from Blog is that the vote count will be modified at a completely different frequency than the other fields of Blog. Since I want to cache the Blog, following this guide, I chose to separate them.
However, since the vote count is almost always needed when returning the Blog object to the front end, and to avoid the n+1 problem, I declared the BlogVoteCounter field in the Blog class with the EAGER fetch type.
I've already seen this article. Based on my understanding of it, I use a unidirectional relationship and only declare @OneToOne on the Blog side.
However, when I examine the queries, it turns out that JPA still triggers a secondary query to retrieve BlogVoteCounter from the database instead of simply using a join when I use the findAll method on BlogRepository.
select
blogvoteco0_.id as id1_2_0_,
blogvoteco0_.value as value2_2_0_
from
blog_vote_counter blogvoteco0_
where
blogvoteco0_.id=?
So how should I configure this to make the BlogVoteCounter field in Blog always be fetched eagerly?
The usage of ManuallyAssignIdEntitySuperClass follows the Spring JPA docs, since I manually assign the id for the BlogVoteCounter class.
@MappedSuperclass
public abstract class ManuallyAssignIdEntitySuperClass<ID> implements Persistable<ID> {

    @Transient
    private boolean isNew = true;

    @Override
    public boolean isNew() {
        return isNew;
    }

    @PrePersist
    @PostLoad
    void markNotNew() {
        this.isNew = false;
    }
}
And the BlogRepository is derived from JpaRepository
public interface BlogRepository extends JpaRepository<Blog, Long>{
// ...
}
I trigger the query using the findAll method, but using findById or other conditional queries makes no difference.
When to fetch vs. how to fetch: fetchType defines when to fetch the association (immediately vs. later, when someone accesses it), but not how to fetch it (i.e. a second select vs. a join query). So from the JPA spec's point of view, EAGER means "don't wait until someone accesses that field to populate it", but the JPA provider is free to use a join or a second select as long as it does so immediately.
Even though providers are free to choose between a join and a second select, I still thought they would optimise for a join in the EAGER case, so I am interested in the logical reasoning for not using the join.
1. Query generated for repository.findById(blogId);
select
blog0_.id as id1_0_0_,
blog0_.vote_counter_id as vote_cou2_0_0_,
blogvoteco1_.id as id1_1_1_,
blogvoteco1_.value as value2_1_1_
from
blog blog0_
inner join
blog_vote_counter blogvoteco1_
on blog0_.vote_counter_id=blogvoteco1_.id
where
blog0_.id=?
2. Updated Mapping
public class Blog {

    @Id
    private Long id;

    @ManyToOne(optional = false, cascade = ALL, fetch = FetchType.EAGER)
    @PrimaryKeyJoinColumn
    private BlogVoteCounter voteCounter;

    public Blog() {
    }

    public Blog(Long id, BlogVoteCounter voteCounter) {
        this.id = id;
        this.voteCounter = voteCounter;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public BlogVoteCounter getVoteCounter() {
        return voteCounter;
    }

    public void setVoteCounter(BlogVoteCounter voteCounter) {
        this.voteCounter = voteCounter;
    }
}
3. Issues with current Mapping
As per your mapping, it is impossible to create the blog and the vote counter, because it causes a chicken-and-egg problem:
- the blog and the vote counter need to share the same primary key;
- the blog's primary key is generated by the database;
- so in order to get the blog's primary key and assign it to the vote counter as well, you need to store the blog first;
- but the @OneToOne relationship is not optional, so you cannot store the blog first on its own.
4. Changes
Either make the relationship optional so the blog can be stored first, get the id, assign it to the BlogVoteCounter and save the counter,
or don't auto-generate the id and manually assign it, so the blog and the vote counter can be saved at the same time (I have gone for this option, but you can use the first one).
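For completeness, a short usage sketch of option 2; the id source and the counter's setters are assumptions, they are not part of the original classes:

// Manually assign one shared id to both objects, then save once;
// cascade = ALL on Blog.voteCounter persists the counter together with the blog.
Long id = nextBlogId();                          // hypothetical id source (sequence, external generator, ...)
BlogVoteCounter counter = new BlogVoteCounter(); // Persistable#isNew() is still true at this point
counter.setId(id);                               // assumed setter
counter.setValue(0);                             // assumed setter

Blog blog = new Blog(id, counter);
blogRepository.save(blog);                       // note: with a pre-assigned id, save() may issue a merge
                                                 // (extra select) unless Blog also gets the Persistable/isNew treatment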
5. Notes
The default repository.findAll was generating 2 queries, so I overrode that method to generate one join query:
public interface BlogRepository extends JpaRepository<Blog, Long> {

    @Override
    @Query("SELECT b FROM Blog b JOIN FETCH b.voteCounter")
    List<Blog> findAll();
}
select
blog0_.id as id1_0_0_,
blogvoteco1_.id as id1_1_1_,
blog0_.vote_counter_id as vote_cou2_0_0_,
blogvoteco1_.value as value2_1_1_
from
blog blog0_
inner join
blog_vote_counter blogvoteco1_
on blog0_.vote_counter_id=blogvoteco1_.id
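As a side note, an alternative to overriding findAll with a JPQL query is Spring Data's @EntityGraph; a sketch against the mapping above:

import java.util.List;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

public interface BlogRepository extends JpaRepository<Blog, Long> {

    // Asks the provider to fetch voteCounter together with Blog;
    // Hibernate implements this as a (left outer) join.
    @Override
    @EntityGraph(attributePaths = "voteCounter")
    List<Blog> findAll();
}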
In my Spring project using Google App Engine, I'm trying to get an entity from the datastore, but in this case the entity has a @Parent relation. I only have the id of the entity; at this point I have no information about the parent relation.
I tried different queries, using an ancestor filterKey; at the moment I have this:
@Override
public House getNotRestrictions(Long id) {
    return objectifyService.ofy().load().type(House.class)
            .filterKey(Key.create(House.class, id)).first().now();
}
My model is something like this:
@Entity
public class House {

    @Id
    public Long id;

    //other attributes

    @Index
    @Parent
    public Key<User> createdBy;

    //getters and setters
}
When I execute the query, it returns a null entity, even though the id exists in the datastore.
Every entity has a key that includes its kind, its ID/name, and the kind + ID/name of all of its ancestors. If you create a key without passing the ancestor information, this key will be different from the key of the entity you are trying to retrieve.
Also note that you can have many entities with the same kind and ID, if they have different parents.
There are only two ways:
//1. creating the full key with the parent:
Key<House> houseKey = Key.create(Key.create(User.class, userId), House.class, id);

//2. or using the web-safe string representation, which contains the whole key path including namespace and parents:
String webSafeKey = "<encoded key as string>";
Key<House> houseKey = Key.create(webSafeKey);

//Then you can fetch the entity:
House house = ofy().load().key(houseKey).now();
Since you mentioned you don't know the parent, you could start using the web-safe string instead - see the Key.toWebSafeString() method.
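For example, a minimal sketch of capturing the web-safe key at creation time so it can be used for later look-ups; the helper method is hypothetical, not from the question:

// Save the House and hand back its web-safe key, which encodes the full
// ancestor path (User parent included), so no parent lookup is needed later.
public String createHouse(Key<User> owner, House house) {
    house.createdBy = owner;
    Key<House> key = ofy().save().entity(house).now();  // synchronous save returns the key
    return key.toWebSafeString();                       // persist/return this instead of only the Long id
}

// Later, load without knowing the parent:
House house = ofy().load().key(Key.<House>create(webSafeKey)).now();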
I have Store/Clerk classes in my application that are related via the "storeId" foreign key in the "clerks" DB table, and with the Hibernate annotations given in the following code:
Store.java :
@Entity
@Audited
@Table(name = "stores")
public class Store {

    private Set<Clerk> clerks;
    //....

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "store")
    public Set<Clerk> getClerks() {
        return clerks;
    }
}
Clerk.java:
@Entity
@Audited
@Table(name = "clerks")
public class Clerk {

    private Store store;
    //....

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "storeId", referencedColumnName = "storeId")
    public Store getStore() {
        return store;
    }
}
When I insert (persist) a new Clerk, Envers makes entries in the audit tables of both entities ("stores_aud" and "clerks_aud").
But when I update an existing Clerk, it only makes an entry in the "clerks_aud" table.
Can anyone explain why this is happening, and how to make Envers behave the same way in both cases (insert and update)?
Thank you
When you add a new Clerk to a Store, the Store#clerks collection is altered, which results in a new audit entry for Store. When a Clerk is changed, no fields of Store are changed, so no audit entry is generated for it, just for Clerk.
If you also want to generate an audit entry for Store when a Clerk is updated, you will have to handle it yourself. One common solution for this is to have something like a lastUpdated column on Store, which you would update whenever something has changed.
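A minimal sketch of that idea; the field, the touch() helper and the service method are illustrative, not from the question:

@Entity
@Audited
@Table(name = "stores")
public class Store {
    // ... existing mapping ...

    private java.util.Date lastUpdated;

    // Changing this audited column makes Store dirty, so Envers records a Store revision.
    public void touch() {
        this.lastUpdated = new java.util.Date();
    }
}

// In the service code that updates a Clerk, touch the owning Store in the
// same transaction so Envers writes a stores_aud row alongside clerks_aud.
public void updateClerk(Clerk clerk) {
    clerk.getStore().touch();
    session.update(clerk);
}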
I am using Spring and Hibernate for my application.
I am only allowing logical deletes in my application, where I need to set the field isActive=false. Instead of repeating the same field in all the entities, I created a base class with the property and the getter/setter for 'isActive'.
So, during delete, I invoke the update() method and set the isActive to false.
I am not able to get this working. If anyone has any ideas, please let me know.
Base Entity
public abstract class BaseEntity<TId extends Serializable> implements IEntity<TId> {

    @Basic
    @Column(name = "IsActive")
    protected boolean isActive;

    public Boolean getIsActive() {
        return isActive;
    }

    public void setIsActive(Boolean isActive) {
        isActive = isActive;
    }
}
Child Entity
@Entity(name = "Role")
@Table(schema = "dbo")
public class MyEntity extends BaseEntity {
    //remaining entities
}
Hibernate Util Class
public void remove(TEntity entity) {
    //Note: enterprise data should never be removed.
    entity.setIsActive(false);
    sessionFactory.getCurrentSession().update(entity);
}
Try to replace the code in setIsActive method with:
public void setIsActive(Boolean isActive) {
    this.isActive = isActive;
}
In your code, the use of the variable name without this is ambiguous: the parameter shadows the field, so the assignment never reaches the entity's field.
I think you should also add the @MappedSuperclass annotation to your abstract class to achieve field inheritance.
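Putting both suggestions together, the base class would look something like this (a sketch based on the code in the question):

@MappedSuperclass
public abstract class BaseEntity<TId extends Serializable> implements IEntity<TId> {

    @Basic
    @Column(name = "IsActive")
    protected boolean isActive;

    public Boolean getIsActive() {
        return isActive;
    }

    public void setIsActive(Boolean isActive) {
        this.isActive = isActive;  // 'this.' ensures the field, not the parameter, is assigned
    }
}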
The issue with the proposed solution (which you allude to in your comment to that answer) is that it does not handle cascading deletes.
An alternative (Hibernate specific, non-JPA) solution might be to use Hibernate's #SQLDelete annotation:
http://docs.jboss.org/hibernate/orm/3.6/reference/en-US/html/querysql.html#querysql-cud
I seem to recall, however, that this annotation cannot be defined on the superclass and must be defined on each entity class.
The problem with logical delete in general, however, is that you then have to remember to filter every single query and every single collection mapping to exclude these records.
In my opinion an even better solution is to forget about logical delete altogether. Use Hibernate Envers as an auditing mechanism. You can then recover any deleted records as required.
http://envers.jboss.org/
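If you go that route, deleted data stays in the audit tables and can be read back; a rough sketch with the Envers API (entity and id are placeholders):

// org.hibernate.envers.AuditReader / AuditReaderFactory
AuditReader reader = AuditReaderFactory.get(session);

// All revisions recorded for this entity, including the one created by the delete:
List<Number> revisions = reader.getRevisions(MyEntity.class, entityId);

// State of the entity just before it was deleted
// (the last revision is the delete itself, so read the one before it):
Number beforeDelete = revisions.get(revisions.size() - 2);
MyEntity recovered = reader.find(MyEntity.class, entityId, beforeDelete);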
You can use the SQLDelete annotation...
//Package name...
//Imports...
import org.hibernate.annotations.SQLDelete;
import org.hibernate.annotations.Where;

@Entity
@Table(name = "CUSTOMER")
//Override the default Hibernate delete and set the deleted flag rather than deleting the record from the db.
@SQLDelete(sql = "UPDATE customer SET deleted = '1' WHERE id = ?")
//Filter added to retrieve only records that have not been soft deleted.
@Where(clause = "deleted <> '1'")
public class Customer implements java.io.Serializable {

    private long id;
    ...
    private char deleted;
Source: http://featurenotbug.com/2009/07/soft-deletes-using-hibernate-annotations/
We have two tables (an active table and an archive table) which have the same structure (e.g. Employee and EmployeeArchive). To be able to leverage common code for both tables, we have an abstract parent class that defines all the methods and annotations.
We would like to be able to run the same query against both tables and union the results together.
We have another entity/table (e.g. Organization) with a OneToMany/ManyToOne bidirectional relationship with Employee; an Organization has a List of Employees and every Employee has an Organization.
When getting the employees of an organization via the association, we only want the employees from the active table, not the archive.
Is there a way to achieve what we are attempting, or a viable workaround?
We have tried various combinations of @MappedSuperclass and @Entity with @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS) to achieve this. Each implementation nearly achieves what we want, but not completely. For example, to be able to query both tables we can have an abstract parent entity with TABLE_PER_CLASS inheritance, but then we cannot have the mappedBy relationship to Employee in Organization. We can use a mapped superclass as the parent to get the correct relationship, but then we cannot query both the archive and active tables via the union.
Here is basically what we are trying to layout:
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class AbstractEmployee {

    @ManyToOne
    @JoinColumn(name = "employeeId", nullable = false)
    Organization org;
    ...
}

@Entity
public class Employee extends AbstractEmployee {
}

@Entity
public class EmployeeArchive extends AbstractEmployee {
}

@Entity
public class Organization {

    @OneToMany(cascade = ALL, mappedBy = "org")
    List<Employee> employees;
    ...
}
Code
public List<AbstractEmployee> getAllEmployees() {
    Query query = em.createQuery("SELECT e FROM AbstractEmployee e WHERE e.name = 'John'", AbstractEmployee.class);
    return query.getResultList();
}

public List<Organization> getOrganizations() {
    Query query = em.createQuery("SELECT o FROM Organization o", Organization.class);
    List<Organization> orgs = query.getResultList();
    // fetch or eager fetch the Employees but only get the ones from the active employee table
    return orgs;
}
We also tried having the parent class extend a mapped superclass and putting the implementation and annotations in the @MappedSuperclass, but we get an AnnotationException for the relationship to Organization:
@MappedSuperclass
public abstract class AbstractMapped {

    @ManyToOne
    @JoinColumn(name = "employeeId", nullable = false)
    Organization org;
}

@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class AbstractEmployee extends AbstractMapped {
    ... Constructors ...
}
On deployment we get the following exception:
Caused by: org.hibernate.AnnotationException: mappedBy reference an unknown target entity property: Employee.org in Organization.employees
at org.hibernate.cfg.annotations.CollectionBinder.bindStarToManySecondPass(CollectionBinder.java:685)
You can do this by changing the mapping from Organization to Employee so that it uses a relationship table, rather than having the org field in the Employee table. See the example in the Hibernate documentation, which for you would look something like this:
@Entity
public class Organization {

    @OneToMany(cascade = ALL)
    @JoinTable(
            name = "ACTIVE_EMPLOYEES",
            joinColumns = @JoinColumn(name = "ORGANIZATION_ID"),
            inverseJoinColumns = @JoinColumn(name = "EMPLOYEE_ID")
    )
    List<Employee> employees;
    ...
}
However, I have to say that I think having two tables to represent current vs archived Employees is a bad idea. This sounds to me like a 'soft delete' kind of situation, which is better handled with an in-table flag (IS_ACTIVE, or something). Then you don't have these odd abstract classes to do your queries, multiple tables with the same kind of data, etc etc. A bit of a description of this strategy is here.
Then you can use the non-join-table mapping that you've already got, and use the @Where annotation to limit the employees in an organization to the ones that have IS_ACTIVE set to true. An example of this approach is here.
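A rough sketch of that combination; the flag column name and value are illustrative:

@Entity
public class Organization {

    // Keep the plain mappedBy association, but hide soft-deleted employees.
    // @Where is org.hibernate.annotations.Where.
    @OneToMany(cascade = ALL, mappedBy = "org")
    @Where(clause = "IS_ACTIVE = 1")
    List<Employee> employees;
    ...
}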
This is one of the annoying things about hibernate. The way to do this is to have another abstract class, AbstractMapped, which simply looks like this:
@MappedSuperclass
public abstract class AbstractMapped {
}
and then have AbstractEmployee extend AbstractMapped. Then you have AbstractEmployee as both an Entity and a Mapped Superclass, even though the two tags are mutually exclusive.
AbstractEmployee should be the @MappedSuperclass, and should not be an @Entity, which would create a table for the class.
Organization should contain a List<AbstractEmployee>, not a List<Employee>.