Change of design of queries to improve performance - java

This is more like a design question but related to SQL optimization as well.
My project has to import a large number of records into the database (more than 100k records). In the meantime, the project has logic to check each record to make sure it meets the criteria which are configurable. It then will mark the record as no warning or has warning in the database. The inserting and warning checking are done within one importing process.
For each criterion it has to query the database. The query joins two other tables and sometimes adds an additional nested query inside the conditions, such as:
select * from TableA a
join TableB on ...
join TableC on ...
where
(select count(*) from TableA
where TableA.Field = Bla) > 100
Although each individual query takes negligible time, querying across the entire record set takes a considerable amount of time, possibly 4 - 5 hours on a server, especially when there are many criteria. In the end the project stops running the import and rolls back.
I've tried changing "SELECT * FROM" to "SELECT TableA.ID FROM" but it seems to have no effect at all. Is there a better design to improve the performance of this process?

How about making a temp table (or more than one) that stores the aggregated results of the sub-queries, then indexing it (or them) with covering indexes?
From your code above, we'd make a temp table grouping on TableA.Field1 and including a count, then index on (Field1, theCount). On SQL Server the fastest approach would then be:
select * from TableA a
join TableB on ...
join TableC on ...
join (select Field1 from #temp1 where theCount > 100) t on...
The reason this works is that we are doing the same trick twice.
First, we pre-aggregate into the temp table, which is a simple operation and very easy for SQL Server to optimize. So we have taken a piece of the problem and solved it in an optimizable way.
Then we repeat this trick by joining to a subquery, putting the filter inside the subquery, so that the join acts as a filter.
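The pre-aggregation step might look like this (a sketch only; #temp1, Field1, theCount and the 100 threshold are the placeholder names from the example above and would need adapting):

```sql
-- Pre-aggregate once into a temp table...
SELECT Field1, COUNT(*) AS theCount
INTO #temp1
FROM TableA
GROUP BY Field1;

-- ...then cover the join/filter with an index
CREATE INDEX IX_temp1 ON #temp1 (Field1, theCount);
```
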

I would suggest you batch your records together (500 or so at a time) and send them to a stored proc which can do the calculation.
Use simple statements instead of joins in there; that saves time as well. This link might help as well.
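A rough sketch of that idea on SQL Server, passing each batch of IDs through a table-valued parameter (the RecordIdList type, CheckWarningsBatch procedure, and HasWarning column are all hypothetical names; the count check mirrors the example in the question):

```sql
-- Hypothetical table type carrying one batch of record IDs
CREATE TYPE RecordIdList AS TABLE (Id INT PRIMARY KEY);
GO
CREATE PROCEDURE CheckWarningsBatch @Ids RecordIdList READONLY
AS
BEGIN
    -- Mark only the records in this batch, using a simple
    -- pre-aggregated count instead of wide joins
    UPDATE a
    SET a.HasWarning = CASE WHEN cnt.theCount > 100 THEN 1 ELSE 0 END
    FROM TableA a
    JOIN @Ids i ON i.Id = a.ID
    JOIN (SELECT Field, COUNT(*) AS theCount
          FROM TableA GROUP BY Field) cnt ON cnt.Field = a.Field;
END
```
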

A good choice is using an indexed view.
http://msdn.microsoft.com/en-us/library/dd171921(SQL.100).aspx
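For instance, an indexed view that pre-computes the count might look like this (a sketch; the names are placeholders, and note that SQL Server requires SCHEMABINDING and COUNT_BIG(*) in indexed views):

```sql
CREATE VIEW dbo.vFieldCounts
WITH SCHEMABINDING
AS
SELECT Field, COUNT_BIG(*) AS theCount
FROM dbo.TableA
GROUP BY Field;
GO
-- Materializing the view requires a unique clustered index
CREATE UNIQUE CLUSTERED INDEX IX_vFieldCounts ON dbo.vFieldCounts (Field);
```
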

Related

Spring JPA - Reading data along with relations - performance improvement

I am reading data from a table using Spring JPA.
This Entity object has one-to-many relationship to other six tables.
All tables together has 20,000 records in them.
I am using below query to fetch data from DB.
SELECT * FROM A WHERE ID IN (SELECT ID FROM B WHERE COL1 = '?')
A table has relationship to other 6 tables.
Spring JPA is taking around 30 seconds of time to read this data from DB.
Any ideas to improve the data fetch time here?
I am using native queries here, and I am looking for a query rewrite that will optimize the data fetch time.
Please suggest. Thanks.
You might need to consider the following to identify the root cause:
Check if you are ending up with the n+1 query issue. Your query might end up triggering n additional queries for each join table, where n is the number of associations with the join table. You can check this by setting spring.jpa.show-sql=true
If you see the n+1 issue, then you need to set an appropriate FetchMode; refer to https://www.baeldung.com/hibernate-fetchmode for a detailed explanation of using the different FetchModes.
If it is not an n+1 query issue, you might need to check the performance of the generated queries using the EXPLAIN command. Usually an IN clause on non-indexed columns has a performance impact.
So set spring.jpa.show-sql=true, check the queries that are generated and run, and use that to debug and optimize your code or query.
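As a starting point, the logging settings mentioned above would go in application.properties; the format_sql line is an optional extra, not from the answer:

```properties
spring.jpa.show-sql=true
# Optional: pretty-print the generated SQL in the log
spring.jpa.properties.hibernate.format_sql=true
```
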

How to update the first N rows with JPA and Hibernate

I want to update only the first N rows; in SQL:
UPDATE Table1 SET c1 = 'XXX' WHERE Id IN (SELECT TOP 10 Id FROM Table1 ORDER BY c2)
Can Hibernate do that in ONE update?
With Hibernate, you can always issue a native query as such, but the currently running Persistence Context won't be aware of the modified entries.
As long as you only update a relatively small number of entries, you can simply select the N entities and then modify them, so that you benefit from optimistic locking checks and prevent lost updates.
If you want to update lots of entries, then a bulk update query is much more appropriate. You can even run the SQL UPDATE query that you mentioned. That's exactly the reason why JPA and Hibernate allow you to use native SQL queries in the first place.
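As a native-query variant, SQL Server also lets you update through a CTE, which avoids the IN subquery (a sketch using the table and column names from the question):

```sql
-- Updating the CTE updates the underlying first-N rows of Table1
WITH firstN AS (
    SELECT TOP (10) c1
    FROM Table1
    ORDER BY c2
)
UPDATE firstN SET c1 = 'XXX';
```
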

Hibernate performance issue: query execution extremely slow

The following hibernate query is being used to fetch a list of ProductCatalogue records, by passing in catId and inventoryId
select prodcat from ProductCatalogue prodcat where prodcat.prodSec.prodId=:catId and prodcat.prodPlacedOrder.inventoryId=:inventoryId
The tables ProductCatalogue and ProdPlacedOrder each have more than 300,000 records. inventoryId is a column in the prodOrder table, and prodPlacedOrder extends the prodOrder table.
This query on execution takes a lot of time, and the single hibernate query shoots off many complex sql queries.
Any suggestions on what might be the issue and how to modify it such that the query is executed faster?
Difficult to say without more info, but try making ProdPlacedOrder a LAZY fetch if you don't need any data from that table.
Also, as phatmanace mentioned, check your indices.

Is it faster to programmatic join tables or use SQL Join statements when one table is much smaller?

Is it faster to programmatically join tables or use SQL Join statements when one table is much smaller?
More specifically, how does grabbing a string from a hashmap<int, string> of the smaller table and setting its value on the objects returned from the larger table compare to joining the tables on the database? Do the relative sizes of the two tables make a difference?
Update: To rephrase my question: does grabbing the subset of the larger table (the 5,000 - 20,000 records I care about) and then programmatically joining the smaller table (which I would cache locally) outperform an SQL join? Does the SQL join apply to the whole table or just the subset of the larger table that would be returned?
SQL Join Statement:
SELECT id, description
FROM values v, descriptions d
WHERE v.descID=d.descID
AND v.something = thingICareAbout;
Individual Statements:
SELECT id, descID
FROM values v
WHERE v.something = thingICareAbout;
SELECT descID, description
FROM descriptions d;
Programmatic join:
for (Value value : values) {
    value.setDescription(descriptions.get(value.getDescID()));
}
Additional Info: There are a total of 800,000,000 records in the larger table, corresponding to 3,000 values in the smaller table. Most searches return between 5,000 and 20,000 results. This is an Oracle DB.
Don't even think about it. The database can do things locally at least as fast as you can, and without having to ship all the data over the network.
In general, joining tables like this is the sort of operation that SQL databases are optimized for, so there is a good chance that they're fairly hard to beat on this sort of operation.
The relative size of the two tables might make a difference if you attempt to do the join "manually" as you have to factor in the additional memory consumption to hold the bigger table data in memory while you're doing your processing.
While this example is pretty easy to get right, by doing the join yourself you also lose a built-in data integrity check that the database would give you if you let it do the join.
Probably SQL would do the work faster. From my understanding, if you do it in your program, it would have to load the 800,000,000 records from the database into memory for your application, then the 3,000 for the small table, then match each record, discard almost all of them (you're only expecting a few thousand results), and display the rest to the user.
If you put indexes on the right columns in oracle (descID in both tables) then it would be able to find the joining records very quickly and just load up the 5,000-20,000 that you're expecting.
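A sketch of the indexes suggested above, using the table and column names from the question (the index names are made up, and `something` stands for whatever column the search filters on):

```sql
-- descID in both tables, so the join can use index lookups
CREATE INDEX idx_values_descid ON values (descID);
CREATE INDEX idx_descriptions_descid ON descriptions (descID);
-- A composite index can also cover the search predicate itself
CREATE INDEX idx_values_something ON values (something, descID);
```
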
That said, though, the easiest way to find out is to test both and take measurements!
If you do the join in memory, you will need to download 800,000,000 + 3,000 records. If you do the join in the database, you will need to download 5,000 - 20,000 results each time. Which sounds faster to you? Hint: only after some 100,000 searches (100,000 × 20,000 = 2,000,000,000 rows) would the first option start to look faster.

What method of joining entities is faster?

Here are two queries retrieving equivalent data:
SELECT DISTINCT packStatus
FROM PackStatus packStatus JOIN FETCH packStatus.vars, ProcessEntity requestingProcess
WHERE
packStatus.status.code='OGVrquestExec'
AND packStatus.procId=requestingProcess.activityRequestersProcessId
AND requestingProcess.id='1000323733_GU_OGVProc'
SELECT DISTINCT packStatus
FROM PackStatus packStatus JOIN FETCH packStatus.vars
WHERE
packStatus.status.code='OGVrquestExec'
AND packStatus.procId=(SELECT requestingProcess.activityRequestersProcessId FROM ProcessEntity requestingProcess WHERE requestingProcess.id='1000323733_GU_OGVProc')
These queries differ in how requestingProcess is joined with packStatus. In general, which of these two methods is preferable in terms of performance? I'm using JPA 1.2 provided by Hibernate 3.3 on Postgres 8.4.
UPD: I've replaced the fake queries with real queries from my app. Here is the SQL generated by Hibernate for the first and second query. Links to the query plans: first, second. The query plans look pretty much the same; the only difference is at what point data from the bpms_process table is aggregated into the query result. But I don't know whether it is right to generalize these results. Would the query plans be almost the same for queries differing only in join method? Is it possible to get a big difference in query cost by changing the join method?
Use EXPLAIN ANALYZE and see.
I won't be surprised if they get turned into the same query plan.
See: https://stackoverflow.com/tags/postgresql-performance/info
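In Postgres that just means prefixing the generated SQL with EXPLAIN ANALYZE. A sketch, where the query merely stands in for whatever SQL Hibernate actually generates (the table and column names here are made up):

```sql
EXPLAIN ANALYZE
SELECT ps.*
FROM pack_status ps
JOIN bpms_process p ON ps.proc_id = p.activity_requesters_process_id
WHERE ps.status_code = 'OGVrquestExec'
  AND p.id = '1000323733_GU_OGVProc';
```
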