My domain model has a self-referencing bi-directional relationship with relationship management done in the entity:
@Entity
public class User implements BaseEntity<String>, Serializable {

    @Id
    private String username;

    @ManyToMany(cascade = {CascadeType.REFRESH, CascadeType.MERGE, CascadeType.PERSIST})
    private List<User> associatedSenders;

    @ManyToMany(mappedBy = "associatedSenders")
    private List<User> associatedReceivers;

    //
    // Associated Senders
    //

    public List<User> getAssociatedSenders() {
        if (associatedSenders == null) {
            associatedSenders = new ArrayList<User>();
        }
        return associatedSenders;
    }

    public void addAssociatedSender(User sender) {
        if (associatedSenders == null) {
            associatedSenders = new ArrayList<User>();
        }
        associatedSenders.add(checkNotNull(sender));
        if (!sender.getAssociatedReceivers().contains(this)) {
            sender.addAssociatedReceiver(this);
        }
    }

    public void removeAssociatedSender(User sender) {
        if (associatedSenders == null) {
            associatedSenders = new ArrayList<User>();
        }
        associatedSenders.remove(checkNotNull(sender));
        if (sender.getAssociatedReceivers().contains(this)) {
            sender.removeAssociatedReceiver(this);
        }
    }

    public void setAssociatedSenders(List<User> senders) {
        checkNotNull(senders);
        if (associatedSenders == null) {
            associatedSenders = new ArrayList<User>();
        }
        // first remove all previous senders
        for (Iterator<User> it = associatedSenders.iterator(); it.hasNext();) {
            User sender = it.next();
            it.remove();
            if (sender.getAssociatedReceivers().contains(this)) {
                sender.removeAssociatedReceiver(this);
            }
        }
        // now add new senders
        for (User sender : senders) {
            addAssociatedSender(sender);
        }
    }

    //
    // Associated Receivers
    //

    public List<User> getAssociatedReceivers() {
        if (associatedReceivers == null) {
            associatedReceivers = new ArrayList<User>();
        }
        return associatedReceivers;
    }

    /**
     * <p><b>Note:</b> this method should not be used by clients, because it
     * does not manage the inverse side of the JPA relationship. Instead, use
     * the appropriate method at the inverse of the relationship.</p>
     *
     * @param receiver
     */
    protected void addAssociatedReceiver(User receiver) {
        if (associatedReceivers == null) {
            associatedReceivers = new ArrayList<User>();
        }
        associatedReceivers.add(checkNotNull(receiver));
    }

    /**
     * <p><b>Note:</b> this method should not be used by clients, because it
     * does not manage the inverse side of the JPA relationship. Instead, use
     * the appropriate method at the inverse of the relationship.</p>
     *
     * @param receiver
     */
    protected void removeAssociatedReceiver(User receiver) {
        if (associatedReceivers == null) {
            associatedReceivers = new ArrayList<User>();
        }
        associatedReceivers.remove(checkNotNull(receiver));
    }
}
When I add new user entities to the associatedSenders collection, everything works as expected. The table in the db gets updated correctly and the in-memory relationships are correct, as well. However, when I remove a user entity from the associatedSenders collection (or all entities from that collection), e.g. by doing a call like this:
List<User> senders = Collections.emptyList();
user.setAssociatedSenders(senders);
the database table gets updated correctly, but the next call to em.find(User.class, username), where username is the id of the user who previously was in the associatedSenders collection, reveals that the associatedReceivers collection (the inverse side) has not been correctly updated. That is, user is still in that collection. Only if I refresh the entity via em.refresh() is the collection correctly updated. Looks like the entity manager does some caching here, but this behavior seems incorrect to me.
UPDATE: It's probably worth mentioning that I'm modifying the user entity in the frontend within a JSF managed bean, i.e. while the entity is in the detached state.
If you are modifying the object while it is detached, then you must merge() it back into the persistence context. Since you are modifying the source and target objects, you must merge() both of the objects to maintain both sides of the relationship. Cascade merge is not enough, as you have removed the objects, so there is nothing to cascade to.
You could also check the state of your objects after the merge, and before and after the commit.
Perhaps include your merge code.
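A minimal sketch of what that might look like, assuming the detached user and the senders removed in the JSF bean are both still at hand (the EntityManager handling and variable names are illustrative, not the poster's actual code):

// remember which senders are being removed before clearing the owning side
List<User> removedSenders = new ArrayList<User>(user.getAssociatedSenders());
user.setAssociatedSenders(Collections.<User>emptyList()); // fixes both sides in memory

User managedUser = em.merge(user);   // merge the owning side
for (User sender : removedSenders) {
    em.merge(sender);                // removed senders are no longer reachable from user,
}                                    // so cascade merge cannot reach them; merge each explicitly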
The only explanation I can come up with is that the user object you set the empty list on (its associatedSenders field) is not the original cached object, it is just a copy ...
I have a model like this:
class Message {
    @Id
    private UUID id;
    // ...
    @OneToMany(mappedBy = "messageId")
    private List<Value> values;
}

class Value {
    private UUID messageId;
}
The Value entities are being created in a JPA session; then, in another session, I create a Message for which I provide the id myself (matching the messageId of the existing Value entities).
After I have persisted the Message, when I try to call getValues() from it, I get null. What's the best way to solve this? Can I programmatically fetch the relation? Should I open another session?
Solution 1: explicitly initialize child entities during parent entity creation
The main idea of this solution is to create an additional method for loading Value entities by messageId in the ValueRepository and to use it explicitly to initialize the values collection during Message entity creation.
Repository for loading Value entities:
public interface ValueRepository extends JpaRepository<ValueEntity, Long> {

    @Query("SELECT v FROM ValueEntity v WHERE v.messageId = :messageId")
    List<ValueEntity> findByMessageId(Long messageId);
}
Message creation and values collection initialization:
public Message createMessage() {
    Message message = new Message();
    message.setId(1L);
    message.setValues(valueRepository.findByMessageId(message.getId()));
    entityManager.persist(message);
    return message;
}
Solution 2: perform Flush and Refresh
After the Message persist operation you can perform a flush, which will synchronize the entity with the database state, and then a refresh, which will re-read the state of the given entity.
public Message createMessage() {
    Message message = new Message();
    message.setId(1L);
    entityManager.persist(message);
    entityManager.flush();
    entityManager.refresh(message);
    return message;
}
I think Solution 1 is preferable; it is better from a performance perspective because the flush operation can take additional time.
UPDATE:
In the case of a merge operation, use the returned persisted entity for the refresh, instead of the initial object.
public Message createMessage() {
    Message message = new Message();
    message.setId(1L);
    Message persistedMessage = entityManager.merge(message);
    entityManager.flush();
    entityManager.refresh(persistedMessage);
    return persistedMessage;
}
Or, better, divide the save and update operations:
public Message saveOrUpdateMessage() {
    Message message = entityManager.find(Message.class, 1L);
    if (message == null) {
        // handle new entity save
        message = new Message();
        message.setId(1L);
        entityManager.persist(message);
        entityManager.flush();
        entityManager.refresh(message);
    }
    // handle existing entity update
    ValueEntity valueEntity = new ValueEntity();
    valueEntity.setId(2L);
    valueEntity.setMessageId(message.getId());
    entityManager.persist(valueEntity);
    message.getValues().add(valueEntity);
    return message;
}
I have a model class:
public class LimitsModel {
    private Long id;
    private Long userId;
    private Long channel;
}
I also have a unique constraint on my entity set on the fields userId and channel. Throughout the application, there's no chance those could duplicate.
The limits were added mid-development, so we already had users and channels and had to create a limits entity for every existing user. So we're creating them during some operation and there's no other place they're created. Here's how we create them:
List<LimitsModel> limits = userDAO.getUserLimits(userId, channel);
if (isNull(limits) || limits.isEmpty()) {
    List<LimitsModel> limitsToSave = this.prepareLimits();
    limits = userDAO.createOrUpdateLimits(limitsToSave);
}
// ... other operations
What I'm getting is
Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (USER_LIMITS_UNIQUE) violated
Any clues as to what could be the cause? I'm simply fetching the limits from the database, checking whether they exist, and creating them if not. Where's the room for a unique constraint violation?
EDIT
createOrUpdateLimits just calls this method:
public void createOrUpdateAll(final Collection<?> objects) {
    getHibernateTemplate().executeWithNativeSession(session -> {
        Iterator<?> iterator = objects.iterator();
        while (iterator.hasNext()) {
            session.saveOrUpdate(iterator.next());
        }
        return null;
    });
}
prepareLimits is nothing complicated, a simple builder:
private List<LimitsModel> prepareLimits() {
    List<LimitsModel> limitsToSave = LimitsModel.CHANNELS.stream()
            .map(channel -> LimitsModel.builder()
                    .userId(UserUtils.getId())
                    .channel(channel)
                    .build())
            .collect(Collectors.toList());
    return limitsToSave;
}
getUserLimits:
public List<LimitsModel> getUserLimits(Long userId, Long channel) {
    return getHibernateTemplate().execute(session -> {
        final Criteria criteria = session.createCriteria(LimitsModel.class)
                .add(Restrictions.eq(LimitsModel.PROPERTY_USER_ID, userId));
        if (nonNull(channel)) {
            criteria.add(Restrictions.eq(LimitsModel.PROPERTY_CHANNEL, channel));
        }
        return criteria.list();
    });
}
The constraint is on (userId, channel). There is a possibility that the block that gets the limits and then creates them is called twice. Shouldn't the new limits already be in the database when it's called the second time? Isn't the transaction committed already?
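To illustrate the scenario described above, here is one possible interleaving consistent with the symptoms (purely hypothetical, assuming the check-then-create block runs in two concurrent, not-yet-committed transactions):

// Tx A: getUserLimits(userId, channel)  -> empty list (no limits yet)
// Tx B: getUserLimits(userId, channel)  -> also empty, because Tx A has not committed
// Tx A: createOrUpdateLimits(...)       -> INSERT, then commit
// Tx B: createOrUpdateLimits(...)       -> INSERT of the same (userId, channel)
//                                          -> ORA-00001 on USER_LIMITS_UNIQUE at flush/commit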
We are currently facing an 'issue' with new entities and the findDirty interceptor - and from our perspective it looks like a bug. However, it might also be a feature - thus expertise and comments on this topic are appreciated.
Cheers
Christoph
Our environment:
Hibernate 4.2.7 (using JPA API 2.x)
The issue:
We have two entities UserBO and InternalOrganizationBO
The relationship between the two objects is shown below.
Both objects have a boolean flag 'flushUpdates'.
This 'flushUpdates' flag is evaluated in our custom findDirty implementation (an override of EmptyInterceptor.findDirty).
If 'flushUpdates' is false, findDirty returns an empty array.
What we are doing:
1. We get the UserBO object from the database (thus in managed state)
2. We create a new InternalOrganizationBO object
3. We set the flushUpdates flag for the UserBO and InternalOrganizationBO to false
4. We set the InternalOrganizationBO object for the User
5. Call the entityManager.flush method
What we think is a bug:
The entityManager.flush method triggers a SQL INSERT for the InternalOrganizationBO
ALTHOUGH the findDirty method returns an empty array for both objects AND we didn't
call an explicit persist for the InternalOrganizationBO.
(By the way: findDirty seems to work fine when both objects are already in the database and in a managed state.)
So the questions:
Is this really how the Hibernate framework should behave?
Shouldn't the 'empty array' returned from findDirty prevent an insert?
Are there other options to prevent Hibernate from doing the insert?
--- Source Code Entity Classes:
@Entity
@SqlResultSetMapping(....)
@Table(schema="...", name="...")
@IdClass(UserBO_PK.class)
public class UserBO extends AbstractBO implements Serializable {
    ...
    @ManyToOne(fetch=FetchType.LAZY, cascade={CascadeType.MERGE, CascadeType.DETACH, CascadeType.PERSIST})
    @JoinColumn(name=UserBO.INTERNALORGANIZATION_ID)
    protected InternalOrganizationBO internalOrganization;
    ...
    @Transient
    protected boolean flushUpdates = false;

    public boolean isFlushUpdates() {
        return flushUpdates;
    }
    ...
}

@Entity
@SqlResultSetMapping(
        name="InternalOrganizationBO",
        entities = @EntityResult(entityClass=InternalOrganizationBO.class),
        columns = { @ColumnResult(name="externalName") })
@Table(schema="...", name="...")
@IdClass(InternalOrganizationBO_PK.class)
public class InternalOrganizationBO extends AbstractBO implements Serializable {
    ...
    @OneToMany(mappedBy="internalOrganization", fetch=FetchType.LAZY, cascade={CascadeType.MERGE, CascadeType.DETACH, CascadeType.PERSIST})
    protected List<UserBO> users;
    ...
    @Transient
    protected boolean flushUpdates = false;

    public boolean isFlushUpdates() {
        return flushUpdates;
    }
    ...
}
--- Source Code Test Case
//load user object
UserBO user = entityManager.find(UserBO.class, new UserBO_PK("testUser"));

//create internal organization
InternalOrganizationBO intOrg01 = new InternalOrganizationBO(intOrg01_id);
intOrg01.setCode(intOrg01_code);
intOrg01.setBusinessUnit(BusinessUnitEnum.empty);

//set all flush updates to false
intOrg01.setFlushUpdates(false);
user.setFlushUpdates(false);

//add intOrg to user
user.setInternalOrganization(intOrg01);
--- custom impl. for findDirty:
@Override
public int[] findDirty(Object entity, Serializable id, Object[] currentState, Object[] previousState,
        String[] propertyNames, Type[] types) {
    /**
     * logic impl.
     */
    if (entity instanceof AbstractBO) {
        if (((AbstractBO) entity).isFlushUpdates()) {
            return null;
        } else {
            logger.info("Ignoring flush for object: " + entity.getClass().getSimpleName() + ", "
                    + entity.toString());
            return new int[0];
        }
    }
    return null; // fall back to Hibernate's default dirty checking for other entities
}
//flush entityManager
//THIS triggers an insert of intOrg ! (no update to the User table)
entityManager.flush();
I have a bidirectional one-to-many relationship.
0 or 1 client <-> List of 0 or more product orders.
That relationship should be set or unset on both entities:
On the client side, I want to set the List of product orders assigned to the client; the client should then be set / unset on the chosen orders automatically.
On the product order side, I want to set the client to which the order is assigned; that product order should then be removed from its previously assigned client's list and added to the newly assigned client's list.
I want to use pure JPA 2.0 annotations and only one "merge" call to the entity manager (with cascade options). I've tried the following code, but it doesn't work (I use EclipseLink 2.2.0 as the persistence provider):
@Entity
public class Client implements Serializable {

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ProductOrder> orders = new ArrayList<>();

    public void setOrders(List<ProductOrder> orders) {
        for (ProductOrder order : this.orders) {
            order.unsetClient();
            // don't use order.setClient(null);
            // (ConcurrentModificationEx on array)
            // TODO doesn't work!
        }
        for (ProductOrder order : orders) {
            order.setClient(this);
        }
        this.orders = orders;
    }

    // other fields / getters / setters
}
@Entity
public class ProductOrder implements Serializable {

    @ManyToOne(cascade = CascadeType.ALL)
    private Client client;

    public void setClient(Client client) {
        // remove from previous client
        if (this.client != null) {
            this.client.getOrders().remove(this);
        }
        this.client = client;
        // add to new client
        if (client != null && !client.getOrders().contains(this)) {
            client.getOrders().add(this);
        }
    }

    public void unsetClient() {
        client = null;
    }

    // other fields / getters / setters
}
Facade code for persisting client:
// call setters on entity by JSF frontend...
getEntityManager().merge(client);
Facade code for persisting product order:
// call setters on entity by JSF frontend...
getEntityManager().merge(productOrder);
When changing the client assignment on the order side, it works well: On the client side, the order gets removed from the previous client's list and is added to the new client's list (if re-assigned).
BUT when changing on the client side, I can only add orders (on the order side, assignment to the new client is performed); it just ignores when I remove orders from the client's list (after saving and refreshing, they are still in the list on the client side, and on the order side, they are also still assigned to the previous client).
Just to clarify, I DO NOT want to use a "delete orphan" option: When removing an order from the list, it should not be deleted from the database, but its client assignment should be updated (that is, to null), as defined in the Client#setOrders method. How can this be achieved?
EDIT: Thanks to the help I received here, I was able to fix this problem. See my solution below:
The client ("One" / "owned" side) stores the orders that have been modified in a temporary field.
@Entity
public class Client implements Serializable, EntityContainer {

    @OneToMany(mappedBy = "client", cascade = CascadeType.ALL)
    private List<ProductOrder> orders = new ArrayList<>();

    @Transient
    private List<ProductOrder> modifiedOrders = new ArrayList<>();

    public void setOrders(List<ProductOrder> orders) {
        if (orders == null) {
            orders = new ArrayList<>();
        }
        modifiedOrders = new ArrayList<>();
        for (ProductOrder order : this.orders) {
            order.unsetClient();
            modifiedOrders.add(order);
            // don't use order.setClient(null);
            // (ConcurrentModificationEx on array)
        }
        for (ProductOrder order : orders) {
            order.setClient(this);
            modifiedOrders.add(order);
        }
        this.orders = orders;
    }

    @Override // defined by my EntityContainer interface
    public List getContainedEntities() {
        return modifiedOrders;
    }
}
On the facade, when persisting, it checks if there are any entities that must be persisted, too. Note that I used an interface to encapsulate this logic as my facade is actually generic.
// call setters on entity by JSF frontend...
getEntityManager().merge(entity);
if (entity instanceof EntityContainer) {
    EntityContainer entityContainer = (EntityContainer) entity;
    for (Object childEntity : entityContainer.getContainedEntities()) {
        getEntityManager().merge(childEntity);
    }
}
JPA does not do this and as far as I know there is no JPA implementation that does this either. JPA requires you to manage both sides of the relationship. When only one side of the relationship is updated, this is sometimes referred to as "object corruption".
JPA does define an "owning" side in a two-way relationship (for a OneToMany this is the side that does NOT have the mappedBy annotation) which it uses to resolve a conflict when persisting to the database (there is only one representation of this relationship in the database compared to the two in memory so a resolution must be made). This is why changes to the ProductOrder class are realized but not changes to the Client class.
Even with the "owning" relationship you should always update both sides. This often leads people to relying on only updating one side and they get in trouble when they turn on the second-level cache. In JPA the conflicts mentioned above are only resolved when an object is persisted and reloaded from the database. Once the 2nd level cache is turned on that may be several transactions down the road and in the meantime you'll be dealing with a corrupted object.
You have to also merge the Orders that you removed, just merging the Client is not enough.
The issue is that although you are changing the Orders that were removed, you are never sending these orders to the server, and never calling merge on them, so there is no way for your changes to be reflected.
You need to call merge on each Order that you remove. Or process your changes locally, so you don't need to serialize or merge any objects.
EclipseLink does have a bidirectional relationship maintenance feature which may work for you in this case, but it is not part of JPA.
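A minimal sketch of the merge-each-removed-order approach, assuming the facade still has access to the orders that were removed from the client's list (the removedOrders variable is illustrative, not part of the question's code):

// merge the client as before
getEntityManager().merge(client);
// additionally merge every order whose client reference was set to null
for (ProductOrder removedOrder : removedOrders) {
    getEntityManager().merge(removedOrder);
}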
Another possible solution is to add a new property on your ProductOrder; I named it detached in the following examples.
When you want to detach the order from the client you can use a callback on the order itself:
@Entity
public class ProductOrder implements Serializable {
    /*...*/

    //in your case this could probably be @Transient
    private boolean detached;

    @PreUpdate
    public void detachFromClient() {
        if (this.detached) {
            client.getOrders().remove(this);
            client = null;
        }
    }
}
Instead of deleting the orders you want to remove, you set detached to true. When you merge & flush the client, the entity manager will detect the modified orders and execute the @PreUpdate callback, effectively detaching the order from the client.
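A possible usage sketch of that approach (assuming a setDetached setter on ProductOrder; the ordersToUnassign collection and the facade calls are illustrative):

// mark the orders to be detached instead of removing them from the list
for (ProductOrder order : client.getOrders()) {
    if (ordersToUnassign.contains(order)) {
        order.setDetached(true);
    }
}
getEntityManager().merge(client);  // cascades the merge to the orders
getEntityManager().flush();        // @PreUpdate runs detachFromClient() on the marked orders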
I've hit a block once again with Hibernate. I've posted numerous times on different aspects of the user and contact management application that I've been building.
The sad thing is that I didn't really have the time to play with it and understand it better before actually starting to work with it. Sorry, but English is not my native language; I rather speak French. Also, I've started coding in Java in an autodidact way: I'm doing all of this by reading books and haven't gone to school for it, and with time constraints it's hard to read a book from beginning to end.
I'm not sure I should put all of my code dealing with the issue here; from what I've learned on other forums, it's best to post just what's necessary and be concise.
So in my user model I have a UserAccount class, a Profile that holds details like name, preferences, etc., AccountSession, and Phone.
My contact management model has Contact and Group.
UserAccount has a one-to-one association with Profile and one-to-many associations with AccountSession, Contact, and Group, all bidirectional. The one-to-many association with Phone is unidirectional, because Contact also has a unidirectional one with Phone.
Contact has a bidirectional many-to-many with Group and the one-to-many with Phone that I mentioned earlier.
Group also has a bidirectional many-to-many with Contact.
Here are the mappings:
// UserAccount
......
@OneToOne(targetEntity=UserProfileImpl.class, cascade={CascadeType.ALL})
@org.hibernate.annotations.Cascade(value=org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
@JoinColumn(name="USER_PROFILE_ID")
private UserProfile profile;

@OneToMany(targetEntity=ContactImpl.class, cascade={CascadeType.ALL}, mappedBy="userAccount")
@org.hibernate.annotations.Cascade(value=org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
private Set<Contact> contacts = new HashSet<Contact>();

@OneToMany(targetEntity=GroupImpl.class, cascade={CascadeType.ALL}, mappedBy="userAccount")
@org.hibernate.annotations.Cascade(value=org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
private Set<Group> groups = new HashSet<Group>();
.......

// Group
@ManyToOne(targetEntity=UserAccountImpl.class)
@JoinColumn(name="USER_ACCOUNT_ID", nullable=false)
private UserAccount userAccount;

@ManyToMany(targetEntity=ContactImpl.class, cascade={CascadeType.PERSIST, CascadeType.MERGE})
@JoinTable(name="GROUP_CONTACT_MAP", joinColumns={@JoinColumn(name="GROUP_ID")},
        inverseJoinColumns={@JoinColumn(name="CONTACT_ID")})
private Set<Contact> contacts = new HashSet<Contact>();

// Contact
....
@ManyToOne(targetEntity=UserAccountImpl.class)
@JoinColumn(name="USER_ACCOUNT_ID", nullable=false)
private UserAccount userAccount;

@ManyToMany(targetEntity=GroupImpl.class, mappedBy="contacts")
private Set<Group> groups = new HashSet<Group>();
....
// helper method from Group
public void addContact(Contact contact) {
    try {
        this.getContacts().add(contact);
        contact.getGroups().add(this);
    } catch (Exception e) {
    }
}

// helper method from Group
public void removeContact(Contact contact) {
    contact.getGroups().remove(this);
    this.getContacts().remove(contact);
}

// helper method from Contact
public void addGroup(Group group) {
    try {
        this.getGroups().add(group);
        group.getContacts().add(this);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

// helper method from Contact
public void removeGroup(Group group) {
    try {
        group.getContacts().remove(this);
        this.getGroups().remove(group);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

// UserAccount setter from Contact. All the children with many-to-one have the same
/**
 * @param userAccount the userAccount to set
 */
public void setUserAccount(UserAccount userAccount) {
    this.userAccount = userAccount;
}
I'd like to pull the UserAccount by its email field, which is a unique field in the UserAccount table.
In the UserAccountDAO, the method I call to get the UserAccount is getUserAccountByEmail, shown below. I expect this method to load all the child collections of the UserAccount, namely its Contact and Group collections. I want it in such a way that when the UserAccount is loaded with its Contacts collection, each Contact object has a reference to the Groups it belongs to, if any, and vice versa.
public UserAccount getUserAccountByEmail(String email) {
    // try {
    logger.info("inside getUserAccountByEmail");
    logger.debug(email);
    Session session = (Session) this.getDBSession().getSession();
    UserAccount user = (UserAccount) session.createCriteria(this.getPersistentClass())
            .setFetchMode("contacts", FetchMode.SELECT) // recently added
            .setFetchMode("groups", FetchMode.SELECT)   // recently added
            .add(Restrictions.eq("email", email))
            .uniqueResult();
    logger.debug(user);
    return user;
    // } catch(NonUniqueResultException ne) {
    //     logger.debug("Exception Occured: getUserAccountByEmail returns more than one result ", ne);
    //     return null;
    // } catch(HibernateException he) {
    //     logger.debug("Exception Occured: Persistence or JDBC exception in method getUserAccountByEmail ", he);
    //     return null;
    // } catch(Exception e) {
    //     logger.debug("Exception Occured: Exception in method getUserAccountByEmail", e);
    //     return null;
    // }
}
Since there has to be a UserAccount before any contacts and groups, in my unit test, when testing the saving of a contact object for which there must be an existing group, I do this in order:
a. create userAccount object ua
b. create group object g1
c. create contact object c1
d. ua.addGroup(g1)
e. c1.setUserAccount(ua)
f. c1.addGroup(g1)
g. uaDao.save(ua); // which saves the group because of the cascade
h. cDao.save(c1)
Most of the time I use session.get() from Hibernate to pull c1 by its Hibernate-generated id and do all the assertions, which actually works.
But in the integration test, when I call getUserAccountByEmail with and without the setFetchMode, it returns the right object, but all the child collections are empty. I've tried both JOIN and SELECT; the query string changes but the result set is still the same. So this raises some questions:
1. What should I do to fix this?
2. The helper methods work fine, but they're on the parent side (I call them in the test). What I've been wondering is whether doing c1.setUserAccount(ua); is enough to create a strong relationship between UserAccount and Contact. Most of the time I won't save the userAccount together with the contact, yet the helper method that sets the association on both sides, which lives in UserAccount, will not have been called before I save the contact for a particular userAccount. So I'm a little confused about that and suspect that the way the association is set is part of why something is not working properly. Calling session.get(UserAccount.class, ua.getID()) gives, I think, what I want, and I'd like getUserAccountByEmail to do the same.
3. ChssPly76 thinks the mapping has to be rewritten, so I'm willing to let you guide me through this. I really need to know the proper way to do this because we can't learn everything from a good book. So if you think I should change the mapping, just show me how; probably I'm doing things the wrong way without even being aware of it, so don't forget I'm still learning Java itself. Thanks for the advice and remarks, and thanks for reading this.
I agree with you that it seems likely that the associations between your parent objects and their child collections are not getting persisted properly. I always like to start out by looking at what is in the database to figure out what's going on. After you run your test what do you see in the actual database?
It seems likely that one of two things is happening (using UserAccount as an example):
The items in the child collection are not getting saved to the database at all, in which case you'll be able to see in the database that there are no records associated with your UserAccount. This could be caused by saving the UserAccount object before you've added the child object to the UserAccount's collection.
The items in the child collection are getting saved to the database, but without the needed association to the parent object, in which case you'll see rows for your child items but the join column (i.e. 'userAccount') will be null. This could be caused by not setting the userAccount property on the child object.
These are the two scenarios that I've run into where I've seen the problem you describe. Start by taking a look at what goes into your database and see if that leads you farther.
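For illustration, a sketch of wiring up both sides before saving, under the assumption that the question's Impl classes have default constructors and the collection getters shown in the mappings (the exact constructor and helper names are assumptions, not the poster's actual test code):

UserAccount ua = new UserAccountImpl();
Group g1 = new GroupImpl();
Contact c1 = new ContactImpl();

ua.getGroups().add(g1);     // parent side holds the child...
g1.setUserAccount(ua);      // ...and the child points back, so USER_ACCOUNT_ID is not null
ua.getContacts().add(c1);
c1.setUserAccount(ua);

uaDao.save(ua);             // cascade ALL now persists contacts and groups with their join columns set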