Optimizing Hibernate session.createQuery().list(); - java

We have a Users table (MySQL) with 120,000 rows
List<User> users = session.createQuery("from User").list();
This hibernate query takes about 6 to 9 seconds to execute. How can we optimize this? Is MySQL the bottleneck? Or is .list() usually this slow?

Of course it's slow: the query performs a full table scan. You could add a WHERE clause so the query returns a limited number of records, join only the associated objects you actually need, or use the Criteria API with a projection.

Use pagination on your query. You should not fetch all rows at once. You can set the position of the first result and the maximum number of results. For example, if you want to read only the first 100 results, change your query like this:
Query q = session.createQuery("from User");
q.setFirstResult(firstRes); // variable: offset of the first row to return
q.setMaxResults(maxRes);    // variable: maximum number of rows to return
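As a fuller sketch, the paging calls can be driven from a loop. The Hibernate calls are shown as comments (they need an open session), so only the offset arithmetic below runs standalone; the page size of 100 is arbitrary:

```java
// Sketch: page through "from User" in fixed-size chunks.
public class Paging {
    static final int PAGE_SIZE = 100;

    // First-result offset for a 0-based page index.
    static int firstResult(int page) {
        return page * PAGE_SIZE;
    }

    public static void main(String[] args) {
        for (int page = 0; page < 3; page++) {
            int offset = firstResult(page);
            // Query q = session.createQuery("from User");
            // q.setFirstResult(offset);
            // q.setMaxResults(PAGE_SIZE);
            // List<User> users = q.list();
            System.out.println("page " + page + " starts at row " + offset);
        }
    }
}
```

Looping until a page comes back with fewer than PAGE_SIZE rows lets you process the whole table without ever holding 120,000 entities in the session at once.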

Related

Performance issues when calling MySQL stored procedure using Hibernate

I'm trying to understand why the execution time of my stored procedure is so much higher when I run it from Java using Hibernate than when I run it directly in MySQL.
The stored procedure itself is responsible for moving 20000 rows from table A to table B and then deleting them from table A.
Running the stored procedure in MySQL takes around 18 seconds.
In Java, I'm using Hibernate and create a query:
Query query = mainSession
    .createSQLQuery("{CALL my_stored_procedure(:maxResultSize)}")
    .setParameter("maxResultSize", maxResultSize);
Then the query is executed and the session is flushed and cleared:
List<BigInteger> rows = query.list();
mainSession.flush();
mainSession.clear();
This takes around 248 seconds.
Does anyone know why it takes so much more time to call the stored procedure from Java using Hibernate?
What approach should I take to increase the performance?
Could you please try a native query? It is faster for me and works well.
List<Object[]> query = (List<Object[]>) mySessionFactory.getCurrentSession()
    .createNativeQuery("{CALL my_stored_procedure(:maxResultSize)}")
    .setParameter("maxResultSize", maxResultSize)
    .getResultList();

Java Hibernate tips about update all table fields performance

I have a requirement like this.
protected Integer[] updateFullTable(final Class clazz) {
    // First query: fetch the ids of the rows to update.
    final ProjectionList projectionList = Projections.projectionList()
            .add(Projections.property("id"), "id");
    final Criteria criteria = session.createCriteria(clazz)
            .add(Restrictions.eq("typeOfOperation", 1))
            .add(Restrictions.eq("performUpdate", true));
    criteria.setProjection(projectionList);
    final List idsList = criteria.list();
    final Integer[] ids = transformObjectArrayIntoIntegerArray(idsList);
    // Second query: update the rows with those ids.
    final Query query = session.createQuery(
            "update " + clazz.getName()
            + " set activeRegister=true, updateTime=:updateTime where id in (:ids)")
            .setParameter("updateTime", new Date())
            .setParameterList("ids", ids);
    query.executeUpdate();
    return ids;
}
As you can see, I need to update all rows in a table. Sometimes I query all the row ids and later apply the update to those ids in a separate query, but the tables have a lot of records, so this takes anywhere from 30 seconds to 10 minutes depending on the table.
I have changed this code to a single update like this:
final Query query = session.createQuery("update " + clazz.getName() + " set activeRegister=true, updateTime=:updateTime where typeOfOperation=1 and performUpdate=true");
With that single query I avoid the first select, but I can no longer return the affected ids. Later the requirement changed: a
final StringBuilder logRevert;
parameter was added, which needs to store the updated ids so that a direct reverse update can be applied in the DB if required.
But with my single update I can no longer get the ids. My question is: how can I get or return the affected ids, using a stored procedure or some workaround in the DB or in Hibernate? That is, how can I keep the first behaviour with only one query, or with enhanced code?
Any tip?
I have tried:
Using Criteria
Using HQL
Using a named query
Using SQLQuery
Not using a transformer, which returns me a raw Object[]
But the times are still somewhat high.
I want something like
query.executeUpdate(); // RETURNS THE COUNT OF THE AFFECTED ROWS
But I need the affected Ids......
Sorry if the question is simple.
UPDATE
With @dmitry-senkovich's help I could do it using raw SQL, but not with Hibernate; a separate question was asked here:
https://stackoverflow.com/questions/44641851/java-hibernate-org-hibernate-exception-sqlgrammarexception-could-not-extract-re
What about the following solution?
SET #ids = NULL;
UPDATE SOME_TABLE
SET activeRegister = true, updateTime = :updateTime
WHERE typeOfOperation = 1 and performUpdate = true
AND (SELECT #ids := CONCAT_WS(',', id, #ids));
SELECT #ids;
If updateTime is a datetime, you can select all the affected record ids afterwards:
Date updateTime = new Date(); // the same timestamp passed to the update
select id from clazz.getName() where updateTime=:updateTime and activeRegister=true and typeOfOperation=1 and performUpdate=true
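As a sketch, the suggested follow-up select could be built like this (a hypothetical helper; it assumes the entity exposes the four properties used in the update):

```java
public class AffectedIds {
    // Build the HQL that re-selects the rows touched by the update,
    // keyed on the updateTime value that was just written.
    static String affectedIdsHql(String entityName) {
        return "select id from " + entityName
                + " where updateTime = :updateTime"
                + " and activeRegister = true"
                + " and typeOfOperation = 1"
                + " and performUpdate = true";
    }
}
```

Note this only identifies the updated rows reliably if no other writer can stamp the same updateTime value on other rows.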
Updating a large number of rows in a table is a slow operation. This is due to needing to capture the 'old' value of each row in case of a ROLLBACK (due to explicit ROLLBACK, failure of the UPDATE, failure or subsequent query in same transaction, or power failure before UPDATE finishes).
The usual fix is to rethink the application design that necessitated the large UPDATE.
On the other hand, there is a possible fix to the schema. Please provide SHOW CREATE TABLE so I don't have to do as much 'hand waving' in the following paragraph...
It might be better to move the column(s) that need to be updated into a separate, parallel, table ("vertical partitioning"). This might be beneficial if
The original table has lots of wide columns (TEXT, BLOB, etc) -- by not having to make bulky copies.
The original table is being updated simultaneously -- by the updates not blocking each other.
There are SELECTs hitting the non-updated columns -- by avoiding certain other blockings.
You can still get the original set of columns -- by JOINing the two tables together.

Hibernate using sql call for each row to fetch relationship - instead of aggregating to one IN clause

I have a User object with an addresses set to Address. Now let's say I need to fetch 1,000,000 users and display their address in some report.
The Hibernate way to do it is to create one sql call to the User table, and then another call to the Address table for each user. The result is a grand total of 1,000,001 calls and a long query time.
On the other hand if you aggregate all the foreign keys (for example User_Id) and run an IN sql call
FROM Address where User_Id IN (,,,,,,,,)
you reduce the number of calls to 2 - one to the User table and one to the Address table, to bring all the 1,000,000 required address in one call.
But this requires some work on the app side. Not a lot of work, just a for loop, but still. Is it possible to ask Hibernate to do it the efficient way?
Please note that LAZY fetching has nothing to do with it. For my use case I need an EAGER fetching.
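The "for loop" in question is indeed small. A self-contained sketch of the aggregation step (with a stand-in User class, since the real entity mapping is not shown):

```java
import java.util.ArrayList;
import java.util.List;

public class InClauseAggregation {
    // Stand-in for the mapped entity.
    static class User {
        final int id;
        User(int id) { this.id = id; }
    }

    // Collect the foreign keys to feed into
    // "FROM Address WHERE User_Id IN (:ids)".
    static List<Integer> collectUserIds(List<User> users) {
        List<Integer> ids = new ArrayList<>();
        for (User u : users) {
            ids.add(u.id);
        }
        return ids;
    }
}
```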
Hibernate can generate a single query using joins; I don't know what sort of configuration you are using. SELECT u FROM User u LEFT JOIN FETCH u.addresses would give you the addresses via a join, in a single query.
I would suggest you make 2 queries. I use a Java 8 stream.
Query query = session.createQuery("from User").setMaxResults(BATCH);
List<User> users = query.list();
final List<Integer> userIds = users.stream()
        .map(u -> u.getUserId())
        .collect(Collectors.toList());
query = session.createQuery("FROM Address where User_Id IN (:ids)")
        .setParameterList("ids", userIds);
final List<Address> result = query.list();
Also, I would suggest not fetching 1,000,000 rows in one query; use batch processing.
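For very large id lists, the IN clause itself usually has to be chunked as well (some databases cap the size of an IN list, e.g. Oracle at 1,000 expressions). A minimal partitioning sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class IdChunks {
    // Split a large id list into IN-clause-sized chunks.
    static <T> List<List<T>> partition(List<T> ids, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(
                    ids.subList(i, Math.min(i + chunkSize, ids.size()))));
        }
        return chunks;
    }
}
```

Each chunk would then be bound with setParameterList("ids", chunk) and the per-chunk results concatenated.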

When will setFetchSize() and setMaxResults() actually filter the result set with Oracle 11g?

If I have a table called ACCOUNTS that has one million records and I issue the following Criteria query, when does the filtering of the number of records returned take place? I'm interested in whether or not the results of the query would differ when .setFetchSize(100) is and is not included in the query. With setFetchSize(100), will Oracle fetch only 100 records then order them?
Criteria criteria = session.createCriteria(Accounts.class)
.setFetchSize(100)
.setMaxResults(100)
.addOrder(Order.desc("acct_id"));
I believe that number should be interpreted as a limit that gets built into the SQL query itself (e.g. WHERE ROWNUM <= 100). You can make sure of that by enabling Hibernate SQL logging and inspecting the generated query.

Hibernate limit result inquiry

How does the maxResults property of a Hibernate query work in the example below?
Query query = session.createQuery("from MyTable");
query.setMaxResults(10);
Does this fetch all rows from the database and display only 10 of them, or is it the same as LIMIT in SQL?
It's the same as LIMIT, but it is database-independent. For example MS SQL Server does not have LIMIT, so hibernate takes care of translating this. For MySQL it appends LIMIT 10 to the query.
So, always use query.setMaxResults(..) and query.setFirstResult(..) instead of native sql clauses.
In my case, setMaxResults retrieved all the rows and then displayed only the number of rows that was set; I didn't manage to find a method that really retrieves only a limited number of rows.
My recommendation is to write the query yourself and put ROWNUM or LIMIT in it.
