I'm running into a problem with Hibernate: when trying to delete a group of entities I encounter the following error:
javax.persistence.EntityNotFoundException: deleted entity passed to persist: [com.locuslive.odyssey.entity.FreightInvoiceLine#<null>]
These are not normally so difficult to track down as they are usually caused by an entity being deleted but not being removed from a Collection that it is a member of.
In this case I have removed the entity from every list that I can think of (it's a complex data model). I've set JBoss logging to TRACE and I can see the collections that are being cascaded. However, I still can't find the collection containing the entity that I'm deleting.
Does anyone have any tips for resolving this particular Exception? I'm particularly looking for ways to identify what might be the owning Collection.
Thanks.
Finally found it, and it's exactly the sort of frustrating "find the list" that I had expected.
The code that was performing the delete was extending Seam's EntityHome.
public class FreightInvoiceHome extends EntityHome<FreightInvoice> {

    public void deleteLine(FreightInvoiceLine freightInvoiceLine) {
        getEntityManager().remove(freightInvoiceLine);
        freightInvoiceLine.getShipInstrLineItem().getFreightInvoiceLines().remove(freightInvoiceLine);

        /* These next two statements are effectively performing the same action on the
         * same FreightInvoice entity. If I use the first one then I get the exception.
         * If I use the second one then all is ok.
         */
        getInstance().getFreightInvoiceLines().remove(freightInvoiceLine);
        //freightInvoiceLine.getFreightInvoice().getFreightInvoiceLines().remove(freightInvoiceLine);
    }
}
I had suspected that this might have been caused by a dodgy equals() / hashCode() implementation, but after replacing both there was no difference.
Happy to change the accepted answer to someone else's if they can explain the difference between the two.
I would suggest doing the
getEntityManager().remove(freightInvoiceLine);
as the last step. I think it is good practice to first remove the child from any collections and only then actually delete it; that will save headaches in many cases.
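For example, the deleteLine() method from the question could be reordered like this (a minimal sketch using the same entities and accessors; it assumes both collections shown above are the ones that cascade to the line):

public void deleteLine(FreightInvoiceLine freightInvoiceLine) {
    // Detach the child from every collection that owns it first...
    freightInvoiceLine.getShipInstrLineItem().getFreightInvoiceLines().remove(freightInvoiceLine);
    freightInvoiceLine.getFreightInvoice().getFreightInvoiceLines().remove(freightInvoiceLine);

    // ...and only then ask the EntityManager to delete it.
    getEntityManager().remove(freightInvoiceLine);
}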
I have searched Stack Overflow and other websites for the pros, cons, and conveniences of using Sets vs. Lists, but I really couldn't find a definitive answer for when to use one or the other.
Hibernate's documentation states that non-duplicate records should go into Sets and that, from there, you should implement hashCode() and equals() for every entity that could be wrapped in a Set. But then there is the price of convenience and ease of use, as some articles recommend using business keys as every entity's id so that hashCode() and equals() can be implemented correctly for every situation regardless of the object's state (managed, detached, etc.).
It's all fine, all fine... until I come across lots of situations where Sets are just not workable: ordering (though Hibernate gives you SortedSet), the convenience of collectionObj.get(index) and collectionObj.remove(int location || Object obj), Android's ListView/ExpandableListView architecture (GroupIds, ChildIds), and so on. My point is: Sets are (imho) really awkward to manipulate and make work 100%.
I am tempted to change every single collection in my project to List, as they work very well. The IDs for all my entities are generated through MySQL's auto-increment (@GeneratedValue(strategy = GenerationType.IDENTITY)).
Is there anyone out there who could definitively clear up my mind on all the little details mentioned above?
Also, is it doable to use Eclipse's auto-generated hashCode() and equals() for the ID field for every entity? Will it be effective in every situation?
Thank you very much,
Renato
List versus Set
Duplicates allowed
Lists allow duplicates and Sets do not. For some, this will be the main reason for choosing List or Set.
Multiple bags exception - multiple eager fetches in the same query
One notable difference in Hibernate's handling is that you can't fetch two different lists in a single query:
it will throw a "cannot simultaneously fetch multiple bags" exception. With sets, there is no such issue.
A list, if there is no index column specified, will just be handled as a bag by Hibernate (no specific ordering).
@OneToMany
@OrderBy("lastname ASC")
public List<Rating> ratings;
As noted above, you can't fetch two different lists in a single query. For example, if you have a Person entity with a list of contacts and a list of addresses, you won't be able to use a single query to load persons with all their contacts and all their addresses. The solution in this case is to make two queries (which avoids the cartesian product), or to use a Set instead of a List for at least one of the collections.
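To make that concrete, here is a minimal sketch of such a mapping (Person, Contact, and Address are hypothetical entities, not from the question):

import java.util.List;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Person {

    @Id
    private Long id;

    // A bag-like List: eagerly JOIN FETCHing two of these in one query
    // makes Hibernate throw "cannot simultaneously fetch multiple bags".
    @OneToMany
    private List<Contact> contacts;

    // Turning one of the collections into a Set avoids the problem.
    @OneToMany
    private Set<Address> addresses;
}

@Entity
class Contact {
    @Id
    private Long id;
}

@Entity
class Address {
    @Id
    private Long id;
}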
It's often hard to use Sets with Hibernate when you have to define equals and hashCode on the entities and don't have an immutable functional key in the entity.
For example, I have the following entity:
class User {
    ...
    private Set<Question> questions;
    ...
}
When I operate on the domain model:
user.questions.add(...);
Hibernate will load ALL the questions of this collection, even if I set the collection to LAZY. How can I change this behavior?
You'll have to annotate the collection with
@LazyCollection(LazyCollectionOption.EXTRA)
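Applied to the User example above, the mapping would look roughly like this (the Question stub and field names are assumptions for the sketch):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.LazyCollection;
import org.hibernate.annotations.LazyCollectionOption;

@Entity
public class User {

    @Id
    private Long id;

    // EXTRA-lazy: operations like size() and contains() are executed as
    // queries instead of initializing the whole collection.
    @OneToMany
    @LazyCollection(LazyCollectionOption.EXTRA)
    private Set<Question> questions = new HashSet<>();
}

@Entity
class Question {
    @Id
    private Long id;
}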
TL;DR
Don't do this; load the whole collection on request.
Details
I think you should reconsider your desire to avoid loading the collection when its elements are updated via an add() call. Here are my arguments:
When you add an element, the result (i.e. whether the element was added or not) might depend on the type of the collection. For a Set it definitely depends on the collection's contents.
In terms of business logic, your Set represents the questions related to a user. Let's imagine you achieved the result you want - the first five questions are in the collection and the other ten are not. What is the business meaning of the collection then? Sounds really questionable.
If you consider my arguments bad, feel free to use the techniques described in other answers.
So the actual problem is: when cascading is applied there is a performance issue, because there can be many questions.
My answer would then be: don't use cascading to save questions; persist the question using a regular EntityManager.persist() call.
Pretty obvious, right?
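A sketch of what that looks like, assuming a bidirectional mapping where Question owns the association (the setText()/setUser() setters are hypothetical):

public void addQuestion(EntityManager entityManager, User user, String text) {
    Question question = new Question();
    question.setText(text);
    question.setUser(user);          // set the owning side of the association
    entityManager.persist(question); // a single INSERT; user's collection is never loaded
}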
If the result set is very large, you can process it in batches by setting a maximum limit:
query.setMaxResults(int maxResults)
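For example, a sketch that pages through the questions (the page size and the process() handler are assumptions, not from the answer):

import java.util.List;
import javax.persistence.EntityManager;

public class QuestionBatchProcessor {

    private static final int PAGE_SIZE = 500; // assumption: tune to your memory budget

    public void processAll(EntityManager em) {
        int first = 0;
        List<Question> page;
        do {
            page = em.createQuery("select q from Question q", Question.class)
                     .setFirstResult(first)
                     .setMaxResults(PAGE_SIZE)
                     .getResultList();
            for (Question q : page) {
                process(q); // hypothetical per-entity handler
            }
            first += PAGE_SIZE;
        } while (page.size() == PAGE_SIZE);
    }

    private void process(Question q) {
        // ...
    }
}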
I have a class which models an FK relationship. It has two lists in it, containing the column names of the parent table and the child table respectively. These lists are passed to me by the client. Now, before creating my FK object, I think it is necessary to do the following checks (in order):
Check that the lists are not null.
Check that the lists do not contain null.
Check that neither list contains duplicate columns.
Check that both lists are of equal size.
So you can see there will be seven checks in total (the first three apply to each list). Is it OK to have so many checks?
If it is OK to have this many checks, is there any pattern for handling such cases (with a high number of validation checks)?
If it is not OK, then what should I do? Should I just document these conditions as part of the contract and mention that the API will produce nonsensical results if the contract is violated?
Edit: Basically, I am trying to take these two lists and produce a database-specific query, so it is kind of important to have this object built correctly.
Like everybody says, it depends on you. There is no fixed/standard guideline for this. But to keep it simple, you should put all your validation logic in one place, so that it remains readable and easy to change.
A suggestion: as you said, all of your validation logic seems to be very business oriented, by which I mean the end user should not be bothered about your DB configuration. Let's assume your class is named FKEntity. If you follow the entity concept, you can put the validation logic in FKEntity.validate() (implementing an interface Validatable), which validates that particular entity - this is for the kind of validation logic that applies to all FKEntity objects in the same way. If you need validation logic that compares or processes different FKEntity instances depending on each other (e.g. if one FKEntity has some value "x" then no other entity can have "x" as its value; if one does, you cannot allow the entire entity list to persist), then put that logic in your service layer.
interface Validatable {
    void validate() throws InvalidEntityException;
}

class FKEntity implements Validatable {
    //..
    public void validate() throws InvalidEntityException {
        //your entity specific logic
    }
}

class FKDigestService {
    public void digestEntities() {
        try {
            for (FKEntity e : entityList) {
                e.validate();
            }
            //your collective validation logic goes here
        } catch (InvalidEntityException e) {
            //do whatever you want
        }
    }
}
This will give you two advantages:
Your entity-specific validation logic is kept in a single place (try to treat most of the logic as entity-specific logic).
Your collective logic is separated from the entity logic; you cannot put it in your entity, since it only applies when there is a collection of FKEntity, not to a single entity - it is business logic, not validation logic.
It depends on you. There is no real argument against many checks. If you are developing an API, they can be very useful for other programmers, and they will make your own program more reliable.
I think the important point is that you do your checks at one single point. You must have a clean and simple interface for your API, and in this interface it is fine to do the checks. After those checks, you can be sure that everything works.
What happens if you leave the checks out? Will an exception be thrown somewhere, or will the program just do something? If the program will just keep working and do something unpredictable, you should provide checks, or things will begin to get strange. But if an exception will be thrown anyway, (I think) you can leave the checks out - the program will get an exception either way.
This is a complex problem, so the solution should be as simple as possible so as not to make it even more complicated and less understandable.
My approach would be:
a public method wrapping a private method named something like doAllNeededListsValidationInFixedOrder(), in which I'd call further private methods - one for each needed validation.
And of course a method like doAllNeededListsValidationInFixedOrder() should be accompanied by some solid Javadoc, even though it's not public.
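A minimal sketch of that approach (all names besides doAllNeededListsValidationInFixedOrder() are illustrative):

import java.util.HashSet;
import java.util.List;

public final class FkDefinitionValidator {

    /** Public entry point; wraps the private fixed-order validation. */
    public void validate(List<String> parentColumns, List<String> childColumns) {
        doAllNeededListsValidationInFixedOrder(parentColumns, childColumns);
    }

    /**
     * Runs every list validation in a fixed, documented order:
     * null lists, null elements, duplicates, then matching sizes.
     */
    private void doAllNeededListsValidationInFixedOrder(List<String> parent, List<String> child) {
        checkListsNotNull(parent, child);
        checkNoNullElements(parent, child);
        checkNoDuplicates(parent, child);
        checkEqualSize(parent, child);
    }

    private void checkListsNotNull(List<String> parent, List<String> child) {
        if (parent == null || child == null) {
            throw new IllegalArgumentException("column lists must not be null");
        }
    }

    private void checkNoNullElements(List<String> parent, List<String> child) {
        if (parent.contains(null) || child.contains(null)) {
            throw new IllegalArgumentException("column lists must not contain null");
        }
    }

    private void checkNoDuplicates(List<String> parent, List<String> child) {
        if (new HashSet<>(parent).size() != parent.size()
                || new HashSet<>(child).size() != child.size()) {
            throw new IllegalArgumentException("column lists must not contain duplicates");
        }
    }

    private void checkEqualSize(List<String> parent, List<String> child) {
        if (parent.size() != child.size()) {
            throw new IllegalArgumentException("column lists must be of equal size");
        }
    }
}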
If you want to go for a pattern, the solution isn't so straightforward. The basic way to require checks in a given order is to create lots of classes - one for every state, each indicating that the object has passed one check but not yet the next.
So you can achieve this with the State pattern - treating every check as a new state of the object.
OR
You can use something like the Builder pattern with a forced order of method invocations to create the object. It basically uses a lot of interfaces so that every single (building) method (here: validating) is exposed by a different interface, in order to control the order in which they are called.
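A sketch of such a step builder (all names are illustrative; each interface exposes only the next allowed check):

import java.util.HashSet;
import java.util.List;

public final class FkValidationBuilder implements
        FkValidationBuilder.NullCheck, FkValidationBuilder.ElementCheck,
        FkValidationBuilder.DuplicateCheck, FkValidationBuilder.SizeCheck {

    public interface NullCheck { ElementCheck checkNotNull(); }
    public interface ElementCheck { DuplicateCheck checkNoNullElements(); }
    public interface DuplicateCheck { SizeCheck checkNoDuplicates(); }
    public interface SizeCheck { void checkEqualSize(); }

    private final List<String> parent;
    private final List<String> child;

    private FkValidationBuilder(List<String> parent, List<String> child) {
        this.parent = parent;
        this.child = child;
    }

    public static NullCheck of(List<String> parent, List<String> child) {
        return new FkValidationBuilder(parent, child);
    }

    @Override
    public ElementCheck checkNotNull() {
        if (parent == null || child == null) throw new IllegalArgumentException("null list");
        return this;
    }

    @Override
    public DuplicateCheck checkNoNullElements() {
        if (parent.contains(null) || child.contains(null)) throw new IllegalArgumentException("null element");
        return this;
    }

    @Override
    public SizeCheck checkNoDuplicates() {
        if (new HashSet<>(parent).size() != parent.size()
                || new HashSet<>(child).size() != child.size())
            throw new IllegalArgumentException("duplicate columns");
        return this;
    }

    @Override
    public void checkEqualSize() {
        if (parent.size() != child.size()) throw new IllegalArgumentException("size mismatch");
    }
}

The compiler then only accepts the calls in order:

FkValidationBuilder.of(parentCols, childCols)
    .checkNotNull()
    .checkNoNullElements()
    .checkNoDuplicates()
    .checkEqualSize();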
Going back to the beginning - using a simple, well documented and properly named method that hides the set of validating methods seems better to me.
If it is OK to have this many checks, is there any pattern for handling such cases (with a high number of validation checks)?
These checks become trivial if tackled from a data conversion point of view.
A list from the client is initially an arbitrary list of any possible elements.
A list from the client is to be converted to a well-defined list of non-duplicating, non-null elements.
This conversion can be decomposed into several simple conversions: ToNonNull, ToNonNullList, ToNonDuplicatingList.
The last requirement is essentially a conversion from two lists to one list of pairs: ToPairs(ListA, ListB).
Put together it becomes:
ParentTableColumns = List1FromClient.
    ToNonNull.
    ToNonNullList.
    ToNonDuplicatingList

ChildTableColumns = List2FromClient.
    ToNonNull.
    ToNonNullList.
    ToNonDuplicatingList

ParentChildColumnPairs = List.
    ToPairs(ParentTableColumns, ChildTableColumns)
If the data from the client is valid then all conversions succeed and a valid result is obtained.
If the data from the client is invalid then one of the conversions fails and produces an error message.
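A sketch of those conversions in Java (method names mirror the pseudocode above; Map.Entry stands in for a pair type):

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public final class ColumnListConversions {

    static List<String> toNonNull(List<String> list) {
        return Objects.requireNonNull(list, "list from client must not be null");
    }

    static List<String> toNonNullList(List<String> list) {
        if (list.contains(null)) {
            throw new IllegalArgumentException("list must not contain null elements");
        }
        return list;
    }

    static List<String> toNonDuplicatingList(List<String> list) {
        if (new LinkedHashSet<>(list).size() != list.size()) {
            throw new IllegalArgumentException("list must not contain duplicate columns");
        }
        return list;
    }

    static List<Map.Entry<String, String>> toPairs(List<String> parents, List<String> children) {
        if (parents.size() != children.size()) {
            throw new IllegalArgumentException("column lists must be of equal size");
        }
        List<Map.Entry<String, String>> pairs = new ArrayList<>();
        for (int i = 0; i < parents.size(); i++) {
            pairs.add(new SimpleImmutableEntry<>(parents.get(i), children.get(i)));
        }
        return pairs;
    }
}

Each conversion either succeeds or fails with a specific error message, so chaining them reproduces the pipeline above (with static imports of the methods):

List<Map.Entry<String, String>> pairs = toPairs(
        toNonDuplicatingList(toNonNullList(toNonNull(list1FromClient))),
        toNonDuplicatingList(toNonNullList(toNonNull(list2FromClient))));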
I have an HQL query, something like:
SELECT myclass
FROM MyClass myclass
JOIN FETCH myclass.anotherset sub
JOIN FETCH sub.yetanotherset
...
So, class MyClass has a property "anotherset", which is a set containing instances of another class, let's call it MyClassTwo. And class MyClassTwo has a property "yetanotherset", which is a set of a third type of class (with no further associations on it).
In this scenario, I'm having trouble with the hashCode implementation. Basically, the hashCode implementation of MyClassTwo uses the "yetanotherset" property, and on the exact line where it accesses that property, it fails with a LazyInitializationException.
org.hibernate.LazyInitializationException: illegal access to loading collection
I'm guessing this is because the data for "yetanotherset" hasn't been fetched yet, but how do I fix this? I don't particularly like the idea of dumbing down the hashCode to ignore the property.
An additional question: does HQL ignore fetch=FetchType.EAGER as defined in XML or annotations? It seems like it does, but I can't verify this anywhere.
Implementing hashCode() using a mutable field is a bad idea: it makes storing the entity in a HashSet and modifying the mutable property impossible.
Implementing it in terms of a collection of other entities is an even worse idea: it forces the loading of the collection to compute the hashCode.
Choose a unique, immutable property (or set of properties) in your entity, and implement hashCode based on that. As a last resort, you have the option of using the ID, but if it's autogenerated, you must not put the entity in a Set before the ID is generated.
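For example, a sketch of equals()/hashCode() based on an immutable natural key (Book and its ISBN are hypothetical; lazy collections are deliberately not involved):

import java.util.Objects;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Book {

    @Id
    private Long id;     // surrogate key, not used for equality

    private String isbn; // immutable business key, set once at creation

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Book)) return false;
        return Objects.equals(isbn, ((Book) o).isbn);
    }

    @Override
    public int hashCode() {
        return Objects.hash(isbn);
    }
}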
This is Hibernate's most famous exception, and it is exactly as you described it: the session has been disconnected, the transaction closed, and you are attempting to access this collection. JOIN FETCH in your HQL should force EAGER loading to occur regardless of whether that annotation is present.
I suspect that your annotations are malformed, you have missing or out of date jars, or some other problem of that type.
Bump your Hibernate logging level up so it logs the generated SQL (hibernate.SQL=debug) and investigate exactly what SQL is being executed up to where you see this exception. This should indicate whether your Hibernate configuration is behaving the way you think it's configured.
Post more of your code and the logs and someone might be able to help you spot the error.
Is anyone aware of whether it is valid for Hibernate's Criteria.list() and Query.list() methods to return multiple occurrences of the same entity?
Occasionally I find, when using the Criteria API, that changing the default fetch strategy in my class mapping definition (from "select" to "join") can affect how many references to the same entity appear in the resulting output of list(), and I'm unsure whether to treat this as a bug or not. The javadoc does not define it; it simply says "The list of matched query results." (thanks, guys).
If this is expected and normal behaviour, then I can de-dup the list myself; that's not a problem. But if it's a bug, then I would prefer to avoid it rather than de-dup the results and try to ignore it.
Anyone got any experience of this?
Yes, getting duplicates is perfectly possible if you construct your queries so that this can happen. See for example Hibernate CollectionOfElements EAGER fetch duplicates elements
I also started noticing this behavior in my Java API as it grew. Glad there is an easy way to prevent it. As a matter of practice I've started appending:
.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
To all of my criteria that return a list. For example:
List<PaymentTypeAccountEntity> paymentTypeAccounts = criteria()
.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
.list();
If you have an object which has a list of sub objects on it, and your criteria joins the two tables together, you could potentially get duplicates of the main object.
One way to ensure that you don't get duplicates is to use a DistinctRootEntityResultTransformer. The main drawback to this is if you are using result set buffering/row counting. The two don't work together.
I had the exact same issue with the Criteria API. The simple solution for me was to set distinct to true on the query, like:
CriteriaQuery<Foo> query = criteriaBuilder.createQuery(Foo.class);
query.distinct(true);
Another possible option that came to my mind would be to simply pass the resulting list into a Set, which by definition holds just a single instance of each object.
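A sketch of that, using a LinkedHashSet so the original result order is preserved (this relies on the entity having sensible equals()/hashCode() implementations):

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

static <T> List<T> distinct(List<T> results) {
    // LinkedHashSet drops duplicates but keeps insertion order.
    return new ArrayList<>(new LinkedHashSet<>(results));
}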