My project recently discovered that Hibernate can take multiple levels of relationships and eager-fetch them in a single HQL join to produce the fully populated object we need. We love this feature, figuring it would outperform lazy fetching.
Problem is, we hit a situation where a single parent has about a dozen direct relationships, and a few sub-relationships off of those, and some of them have several dozen rows in a few instances. The result is a pretty large cross-product that leaves the HQL spinning its wheels virtually forever. We turned logging up to 11 and saw more than 100,000 iterations before we gave up and killed it.
So clearly, while this technique is great for some situations, it has limits like everything in life. But what is the best-performing alternative in Hibernate for this? We don't want to lazy-load these, because we'd get into an N+1 situation that would be even worse.
I'd ideally like Hibernate to pre-fetch all the rows and details, but do it one relationship at a time, and then hydrate the right detail objects onto the right parents, but I have no idea whether it does such a thing.
Suggestions?
UPDATE:
So we got the SQL this query generated, and it turns out I misdiagnosed the problem. The cross-product is NOT that huge. We ran the same query directly against our database and got 500 rows back in just over a second.
Yet we saw very clearly in the Hibernate logging that it made 100K iterations. Is it possible for Hibernate to get caught in a loop over your relationships or something?
Or maybe this should be asked as a new question?
Our team uses a particular strategy for associations: collections are lazy, and single-ended relations are lazy too, except for references with a simple structure (for example, a countries reference). And we use fluent-hibernate to load what we need in a concrete situation, simply because fluent-hibernate supports nested projections. You can refer to this unit test to see how a complex object graph can be partially loaded. A code snippet from the unit test:
List<Root> roots = H.<Root> request(Root.class).proj(Root.ROOT_NAME)
    .innerJoin("stationarFrom.stationar", "stationar")
    .proj("stationar.name", "stationarFrom.stationar.name")
    .eq(Root.ROOT_NAME, rootName).transform(Root.class).list();
See also
How to transform a flat result set using Hibernate
I am using Spring Boot and Hibernate.
Some complex logic, dictated by business, needs to use various nested fields, which traverse various DB relationships (again, some are NxN, Nx1, 1xN, 1x1).
I encountered the N+1 problem, and I solved it at first with HQL, but some queries need several joins and the result sets become unmanageable.
I started working on a custom utility that collects the ids of the things that need to be fetched, fetches them all at once, and uses the setters to populate the fields on the starting objects. This utility works for ManyToOne relationships, but is still inefficient with ManyToMany relationships, because it falls back into the N+1 problem when I collect the ids (it queries the join table once per object via the getter).
How can I solve this? Has this problem really not been solved yet? Am I missing some obvious setting that solves this automagically?
EDIT:
I made a toy example with some commentary: https://github.com/marcotama/n-1-queries-example
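For what it's worth, the id-collection step does not have to go through the getters at all. A minimal sketch of one workaround, assuming a hypothetical Post/tags ManyToMany: chunk the already-known parent ids (to stay under IN-clause limits) and fetch each chunk together with its collection in a single `select distinct p from Post p left join fetch p.tags where p.id in :ids` query.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split the already-known parent ids into chunks,
// so each chunk can be fed to one
// "select distinct p from Post p left join fetch p.tags where p.id in :ids"
// query - one query per chunk instead of one query per object.
class IdChunker {
    static List<List<Long>> chunks(List<Long> ids, int size) {
        List<List<Long>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size) {
            // subList is a view; fine here since we only read from it
            out.add(ids.subList(i, Math.min(i + size, ids.size())));
        }
        return out;
    }
}
```

With, say, 1000 ids and a chunk size of 500, that is two queries total for the whole association, regardless of how many owning objects there are.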
I had faced the same situation, and I had 3 ways to solve it:
increase the fetch size for the dependent attribute, so that the queries are executed in batches
write a custom query for the purpose
define entity graph relations and map them to the corresponding attributes
I personally preferred the 3rd option, as it was convenient to do and cleaner with Spring Data JPA.
You can refer to the examples in the comments on the answers below:
Spring Data JPA And NamedEntityGraphs
What is the solution for the N+1 issue in JPA and Hibernate?
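The 3rd option might look roughly like this. This is only a sketch: the Author/Book entity names and the repository are hypothetical, and it assumes Spring Data JPA and a JPA 2.1+ provider on the classpath.

```java
import java.util.List;
import java.util.Set;
import javax.persistence.*;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
@NamedEntityGraph(name = "Author.books",
        attributeNodes = @NamedAttributeNode("books"))
class Author {
    @Id @GeneratedValue
    Long id;

    // Stays lazy for normal use cases.
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    Set<Book> books;
}

interface AuthorRepository extends JpaRepository<Author, Long> {
    // Applying the named graph makes the provider fetch the books
    // collection together with the authors instead of issuing one
    // extra query per author (the N+1 pattern).
    @EntityGraph("Author.books")
    List<Author> findAll();
}
```

The nice part is that the mapping itself keeps the lazy default; the graph is opted into per repository method.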
Write the fetch logic on your own.
E.g. you have Author, which has books and author_devices.
You can join fetch Author with books. Then you can separately fetch author_devices from a repository with a "where author_id IN (...)" query over the collected author ids. Then you should detach the authors, iterate over the author_devices, and assign each one to the appropriate author's devices list. I think it's the only adequate solution for situations where you need to join-fetch more than one relation.
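The stitch-back step of this approach can be sketched in plain Java (the AuthorDevice shape is hypothetical): after the second query returns all devices for the collected author ids, group them by author id once, so each author's list is assigned with a single map lookup instead of a linear scan.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical detail record, as it would come back from a query like
// "select d from AuthorDevice d where d.authorId in :ids".
class AuthorDevice {
    final long authorId;
    final String name;
    AuthorDevice(long authorId, String name) {
        this.authorId = authorId;
        this.name = name;
    }
}

class DeviceStitcher {
    // Group the separately fetched devices by author id; the caller then
    // iterates the detached authors and sets each one's devices list
    // from this map.
    static Map<Long, List<AuthorDevice>> byAuthor(List<AuthorDevice> devices) {
        return devices.stream()
                .collect(Collectors.groupingBy(d -> d.authorId));
    }
}
```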
I have a Parent entity with a @OneToMany relationship to a Child entity. Most of the time, when I need to work with a Parent's Child entities, I'm working with a single parent, so lazy fetching (FetchMode.SELECT) is appropriate.
However, I have a situation where I'm querying a large number of Parents (sometimes hundreds or even thousands) and need to work with their Child entities. FetchMode.SELECT gives me a serious N+1 problem, so I need to do something different in this scenario. If I were doing this via JDBC, it'd be a single query for the Parent records, then another query for all the Child records using an IN statement (where child.parentid in (?,?,?....)). I need live Hibernate entities, because Hibernate Search is going to call getChildren() as part of its indexing process.
The options I’ve considered are:
Criteria.setFetchMode("children", FetchMode.JOIN) (or join fetch in HQL) - this would give me a cartesian product, though, which is brutal with that many entities.
Adding @BatchSize to Parent.getChildren() - this would help in my big-batch scenario, but it isn't really the strategy I want for normal operations. It'd be perfect if I could set a batch size for the fetch in my Criteria/HQL, but I can't find a way to do so.
Using FetchMode.SUBSELECT on Parent.getChildren() - much like @BatchSize, this would be great for my big-batch scenario but isn't appropriate for normal operations, and I can't find a way to use it with Criteria/HQL (Criteria and the entity annotations use different FetchMode enums, despite the duplicate name).
tldr; I have a one-to-many relationship with a lazy fetch mode, but sometimes I want to be able to efficiently load the relationship for many entities at once.
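For reference, the @BatchSize option from the list above would look like this as a sketch against the question's entity names (@BatchSize is a Hibernate annotation, not standard JPA). Note that batch fetching only applies to collections that are actually initialized, so in the single-parent case it costs nothing extra, which may make it acceptable as the everyday mapping too.

```java
import java.util.Set;
import javax.persistence.*;
import org.hibernate.annotations.BatchSize;

@Entity
class Child {
    @Id @GeneratedValue
    Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    Parent parent;
}

@Entity
class Parent {
    @Id @GeneratedValue
    Long id;

    // When one children collection is initialized, Hibernate also
    // initializes up to 24 other still-uninitialized children
    // collections held by the same session, using a single
    // "where parent_id in (?,?,...)" query - close to the
    // hand-written JDBC approach described in the question.
    @OneToMany(mappedBy = "parent", fetch = FetchType.LAZY)
    @BatchSize(size = 25)
    Set<Child> children;
}
```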
I am in the process of performance testing/optimizing a project that maps
a document <--> Java object tree <--> mysql database
The document, the Java classes, the database schema, and the mapping logic are orchestrated with HyperJaxb3. The ORM piece of it is JPA, provided by Hibernate.
There are about 50 different entities and obviously lots of relationships between them. A major feature of the application is to load documents and then reorganize the data into new documents; all the pieces of each incoming document eventually get sent out in one outgoing document. While I would prefer not to be living in the relational world, the transactional semantics are a very good fit for this application - there is a lot of money and government regulation involved, so we need to make sure everything gets delivered exactly once.
Functionally, everything is going well and performance is decent (after a fair amount of tweaking). Each document is made up of a few thousand entities which end up creating a few thousand rows in the database. The documents vary in size, and insert performance is pretty much proportional to the number of rows that need to be inserted (no surprise there).
I see the potential for a significant optimization, and this is where my question lies.
Each document is mapped to a tree of entities. The "leaf" half of the tree contains lots of detailed information that is not used in the decisions for how to generate the outgoing documents. In other words, I don't need to be able to query/filter by the contents of many of the tables.
I would like to map the appropriate entity sub-trees to blobs, and thus save the overhead of inserting/updating/indexing the majority of the rows I am currently handling the usual way.
It seems that my best bet is to implement a custom EntityPersister and associate it with the appropriate entities. Is this the right way to go? The Hibernate docs are not bad, but it is a fairly complex class to implement, and I am left with lots of questions after looking at the Javadoc. Can you point me to a concrete yet simple example that I can use as a starting point?
Any thoughts about another way to approach this optimization?
I've run into the same problem when storing large amounts of binary data. The solution I found worked best is to denormalize the object model. For example, I create a master record, and then a second object that holds the binary data. On the master, I use a @OneToOne mapping to the secondary object, but mark the association as lazy. Now the data will only be loaded if you need it.
The one thing that might slow you down is the outer join that Hibernate performs with all objects of this type. To avoid it, you can mark the association as mandatory. But if the database doesn't give you a huge performance hit, I suggest you leave it alone. I found that Hibernate has a tendency to load the binary data immediately if I tried a regular join.
Finally, if you need to retrieve a lot of the binary data in a single SQL call, use an HQL fetch join. For example: from Article a join fetch a.data, where a.data is the one-to-one relationship to the binary holder. The HQL compiler will see this as an instruction to get all the data in a single SQL call.
HTH
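A sketch of the mapping described in that answer, using the Article/data names from its HQL example (the ArticleData class name and columns are assumptions). Marking the association mandatory, as suggested, is also what lets Hibernate substitute a proxy for the lazy side instead of having to check for null with an extra query or join.

```java
import javax.persistence.*;

@Entity
class Article {
    @Id @GeneratedValue
    Long id;

    // Lazy one-to-one to the blob holder: the binary data is only
    // loaded when the association is touched, or eagerly for bulk
    // reads via "from Article a join fetch a.data".
    // optional = false marks the association mandatory.
    @OneToOne(fetch = FetchType.LAZY, optional = false)
    ArticleData data;
}

@Entity
class ArticleData {
    @Id @GeneratedValue
    Long id;

    // The large payload lives in its own row/table, so scans over
    // Article never drag the blob through the session.
    @Lob
    byte[] bytes;
}
```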
I was asked to have a look at a legacy EJB3 application with significant performance problems. The original author is not available anymore, so all I've got is the source code and some user comments regarding the unacceptable performance. My personal EJB3 skills are pretty basic; I can read and understand the annotated code, but that's all so far.
The server has a database, several EJB3 beans (JPA), and a few stateless beans that just allow CRUD on 4-5 domain objects for remote clients. The client itself is a Java application. Just a few clients are connected to the server in parallel. From the user comments I learned that
the client/server app performed well in a LAN
the app was practically unusable on a WAN (1MBit or more) because read and update operations took much too long (up to several minutes)
I've seen one potential problem - on all EJB, all relations have been defined with the fetching strategy FetchType.EAGER. Would that explain the performance issues for read operations, is it advisable to start tuning with the fetching strategies?
But that would not explain performance issues on update operations, or would it? Update is handled by an EntityManager; the client just passes the domain object to the manager bean, and persisting is done with nothing but manager.persist(obj). Maybe the domain objects sent to the server are just too big (perhaps a side effect of the EAGER strategy).
So my actual theory is that too many bytes are sent over a rather slow network and I should look at reducing the size of result sets.
From your experience, what are the typical and most common coding errors that lead to performance issues on CRUD operations, where should I start investigating/optimizing?
On all EJB, all relations have been defined with the fetching strategy FetchType.EAGER. Would that explain the performance issues for read operations?
Depending on the relations between classes, you might be fetching much more than actually wanted (the whole database?) when retrieving entities.
is it advisable to start tuning with the fetching strategies?
I can't say that making all relations EAGER is a very standard approach. In my experience, you usually keep them lazy and use "fetch joins" (a type of join that allows eager loading of an association) when you want to eager-load an association for a given use case.
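To make the distinction concrete, here are the two JPQL variants side by side (the Customer/orders names are hypothetical): the everyday lazy query, and the per-use-case fetch join that initializes the collection in the same statement.

```java
// Two JPQL variants for the same hypothetical model.
class Queries {
    // Normal use: orders stay lazy, only customer rows are selected.
    static final String CUSTOMERS =
            "select c from Customer c";

    // Bulk use case: "join fetch" initializes c.orders in the same
    // query; "distinct" folds the row duplicates the join creates
    // back into one entity instance per customer.
    static final String CUSTOMERS_WITH_ORDERS =
            "select distinct c from Customer c join fetch c.orders";
}
```

The mapping itself never changes; only the query chosen per use case does.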
But that would not explain performance issues on update operations, or would it?
It could. I mean, if the app retrieves a big, fat object graph when reading and then sends the same fat object graph back just to update the root entity, there might be a performance penalty. But it's kind of weird that the code uses em.persist(Object) to update entities.
From your experience, what are the typical and most common coding errors that lead to performance issues on CRUD operations, where should I start investigating/optimizing?
The obvious ones include:
Retrieving more data than required
N+1 requests problems (bad fetching strategy)
Poorly written JPQL queries
Inappropriate inheritance strategies
Unnecessary database hits (i.e. lack of caching)
I would start with writing some integration tests or functional tests before touching anything to guarantee you won't change the functional behavior. Then, I would activate SQL logging and start to look at the generated SQL for the major use cases and work on the above points.
From a DBA's position:
From your experience, what are the typical and most common coding errors that lead to performance issues on CRUD operations, where should I start investigating/optimizing?
Turn off caching
Enable SQL logging - by default, EJB3/Hibernate generates a lot of extremely stupid queries.
Now you see what I mean.
Change FetchType.EAGER to FetchType.LAZY
Say "no" to big business logic between em.find and em.persist
Use ehcache http://ehcache.org/
Turn on entity cache
If you can, make primary keys immutable (@Column(updatable = false, ...))
Turn on query cache
Never ever use Hibernate if You want big performance:
http://www.google.com/search?q=hibernate+sucks
In my case, a similar performance problem didn't depend on the fetch strategy. Or let's say it wasn't really possible to change the business logic behind the existing fetch strategies. In my case the solution was simply adding indices.
When your JPA object model has a lot of relationships (OneToOne, OneToMany, ...), you will typically use JPQL statements with a lot of joins. This can result in complex SQL translations. When you take a look at the data model (generated by the JPA provider), you will notice that there are no indices on any of your table columns.
For example, if you have a Customer and an Address object with a one-to-one relationship, everything will look fine at first. Customer and Address have a foreign key. But if you do selections like this
Select c from Customer as c where c.address.zip='8888'
you should take care of the column 'zip' in the ADDRESS table. JPA will not create such an index for you during deployment. So in my case I was able to speed up database performance simply by adding indices.
An SQL Statement in your database looks like this:
ALTER TABLE `mydatabase`.`ADDRESS` ADD INDEX `zip_index`(`IZIP`);
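Since JPA 2.1, the same index can also be declared in the mapping itself, so that schema generation creates it for you. A sketch, reusing the ADDRESS/IZIP names from the SQL above (the entity and field names are assumptions); note this only takes effect when the provider generates the schema:

```java
import javax.persistence.*;

@Entity
@Table(name = "ADDRESS",
       indexes = @Index(name = "zip_index", columnList = "IZIP"))
class Address {
    @Id @GeneratedValue
    Long id;

    // The column queried by "where c.address.zip='8888'"; the @Index
    // above tells schema generation to index it.
    @Column(name = "IZIP")
    String zip;
}
```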
In the question, and in the other answers, I'm hearing a lot of "might"s and "maybe"s.
First find out what's going on. If you haven't done that, we're all just poking in the dark.
I'm no expert on this kind of system, but this method works on any language or OS.
When you find out what's making it take too long, why don't you summarize it here?
I'm especially interested to know if it was something that might have been guessed.
I use Hibernate/Spring and a MySQL database for my data management.
Currently I display a tree structure in a JTable. A tree can have several branches; a branch can in turn have several branches (up to nine levels) or leaves. Lately I have had performance problems as soon as I want to create new branches at deeper levels.
At this time a branch has a foreign key to its parent. The domain object has access to its parent by calling getParent(), which returns the parent branch. The deeper the level, the longer it takes to create a new branch.
Microbenchmark results for creating a new branch are like:
Level 1: 32 ms.
Level 3: 80 ms.
Level 9: 232 ms.
Obviously the level (which means the number of parents) is responsible for this. So I wanted to ask if there are any approaches to work around this kind of problem. I don't understand why Hibernate needs to know about the whole object tree (all parents up to the root) when creating a new branch. But as far as I know this can be the only reason for the delay when creating a new branch, because a branch doesn't have any other relations to any other objects.
I would be very thankful for any workarounds or suggestions.
greets,
ymene
Basically you have some sort of many-to-one relationship structure, right?
In Hibernate everything depends on the mapping. Tweak your mapping: use a one-to-many relationship from parent to child with a java.util.Set.
Do not use an ArrayList, because a List is ordered, so Hibernate will add an extra column just for that ordering.
Also check your lazy property. If you load a parent and have set lazy="false" on its child set property, then all of its children will be loaded from the DB, which can affect performance.
Also check the 'inverse' property for the children. If inverse is true on the child side, it means you can manage the child entity separately; otherwise you have to do that through the parent only.
Google around for inverse; it will surely help you.
Thanks.
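In annotation terms, the mapping suggested above would look roughly like this (a sketch for the question's self-referential branch tree; mappedBy is the annotation counterpart of inverse="true", and a Set avoids the index column a List would add):

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
class Branch {
    @Id @GeneratedValue
    Long id;

    // The child side owns the foreign key, so creating a branch only
    // needs its parent reference, not the whole ancestor chain.
    @ManyToOne(fetch = FetchType.LAZY)
    Branch parent;

    // Inverse side: lazy Set, no ordering column, children managed
    // through their own parent reference.
    @OneToMany(mappedBy = "parent", fetch = FetchType.LAZY)
    Set<Branch> children = new HashSet<>();
}
```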
I don't know how Hibernate handles this internally. However, there are different ways to store tree structures in a database. One which is quite efficient for many queries on the tree is the "nested set" approach, but this would basically produce exactly the performance issues you're seeing (e.g. expensive insertion). If you need fast insertion or removal, I'd go with what you have, i.e. a simple parent ID, and try to see what Hibernate is doing all this time.
If you don't need to report on your data in SQL, you could just serialize your JTable to the database instead (perhaps using something like XStream). That way you wouldn't have to worry about expensive database queries that deal with trees.
One thing you can do is use the XML support in MySQL. This will give you native ability to support hierarchies. I've never used XML support in MySQL, so I don't know if it is as full-featured as other DBMSes (SQL Server and DB2 I know have great support, probably Oracle too I would guess).
Note that I have never used Hibernate, so I don't know if you could interface it with that, or if you would have to write your own DB code in this case (my guess is you're going to be writing your own queries).