SQL Joins vs Java code?

I have a query like this
SELECT Folder.name FROM FolderTable, ValidFolder, ValidFolderGroup, ValidUser,
ValidLocation, ValidDepartment where ValidUser.LocationCode *= ValidLocation.LocationCode
and ValidUser.DepartmentCode *= ValidDepartment.DepartmentCode and Folder.IssueUser =
ValidUser.UserId and ValidFolder.FolderType = Folder.FolderType and
ValidFolderGroup.FolderGroupCode = ValidFolder.FolderGroupCode and
ValidFolderGroup.GroupTypeCode = 13 and (ValidUser.UserId='User' OR
ValidUser.ManagerId='User') and ValidFolderGroup.GroupTypeCode = 13 and
Folder.IssueUser = 'User'
Now, all the tables whose names start with Valid are cache tables, so their data is already loaded in the application.
Suppose someone is using jOOQ or Hibernate. Which of the following would be the better option?
Use the query as written above, with all the joins?
Or use Java code to fulfill the requirement instead of the joins, since with Hibernate or jOOQ there are already Java classes for the tables, and the Valid tables already hold all the data?

Okay, you're probably not going to like this answer, but the best way to do this is not to keep the "Valid" tables cached at all.
The best solution in my opinion would be to use jOOQ (if you prefer DSL) or Hibernate (if you prefer OR mapping) and query the Database every time, and consistently use the DAO pattern.
The jOOQ and Hibernate guys are almost certainly better at SQL than you are. We've used jOOQ and Hibernate in really large enterprise projects, and they both perform exceptionally. Particularly with a good connection pool like BoneCP. If after you've got that setup running, and running well, but still think you may have performance issues, you can always add a cache (like EhCache) afterwards.
Ultimately tho', I'm making a lot of assumptions about your software, namely that
There are more people than you working on it, and
It has to be maintained. If neither of these assumptions are true, then you can safely disregard this answer.

General answer:
Modern databases are incredibly good at optimising your query and choosing the best possible execution plan for you. Given your outer join notation using *=, you're obviously using SQL Server, so that's a pretty good database.
Even if you already have much of the "Valid" data in your application memory, chances are that your database also already has the same data in a buffer cache and thus the database doesn't need to hit the disk again for the various joins in your query.
In fact, depending on the nature of your data, the database might even assess that some of your joins are unneeded (if you have the right meta data, like constraints).
Specific answer:
In your particular case, it looks as though you can indeed strip most of your query yourself and query only the Folder table using search criteria from your application's "Valid" cache. I'm saying that it looks like it, because I don't fully understand the business logic behind those joins and whether they're all modelling 1:1 relationships, or whether removing them will change the semantics of the query.
So, technically, it's possible that you can remove the joins, but if you want to stay on the safe side, just keep things as they are as you migrate to jOOQ or Hibernate.
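If you do decide to strip it, here's a rough jOOQ sketch of what the slimmed-down query could look like. This is only an illustration: the FOLDER table class and its fields stand in for your generated jOOQ classes, "create" is a DSLContext, and validFolderTypes is assumed to come from your in-memory "Valid" cache.
// Sketch only: names are assumptions, not taken from your schema.
List<String> folderNames = create
    .select(FOLDER.NAME)
    .from(FOLDER)
    .where(FOLDER.ISSUE_USER.eq("User"))
    .and(FOLDER.FOLDER_TYPE.in(validFolderTypes)) // folder types resolved from the cached "Valid" data
    .fetch(FOLDER.NAME);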
Alternative 3:
Of course, instead of tampering with this query, you might even be able to remove this query and fetch the Folder.name property already in your previous queries when you load the "Valid" content into memory.

Ever heard of views? Look into them, you'll be amazed.
Apart from that, it's impossible to say what you should do, there's no "best" and you provide way too little information to even make an educated guess about your specific requirements.
But, I'd not hard code things like database IDs in a query that ends up inside any program, far too prone to cause problems in the (near) future.

Related

How to optimize one big insert with hibernate

For my website, I'm creating a book database. I have a catalog with a root node; each node has subnodes, each subnode has documents, each document has versions, and each version is made of several paragraphs.
In order to create this database as fast as possible, I first build the entire tree model in memory and then call session.save(rootNode).
This single save populates my entire database (when I run mysqldump on it afterwards, the dump weighs 1 GB).
The save costs a lot (more than an hour), and since the database grows with new books and new versions of existing books, it costs more and more. I would like to optimize this save.
I've tried increasing the batch_size, but it changes nothing since it's a single save. When I dump the database with mysqldump and insert the script back into MySQL, the operation costs 2 minutes or less.
And when I run "htop" on the Ubuntu machine, I can see that MySQL is only using 2 or 3% CPU, which means that it's Hibernate that is slow.
If someone could give me possible techniques that I could try, or possible leads, it would be great... I already know some of the reasons why it takes time. If someone wants to discuss it with me, thanks for the help.
Here are some of my problems (I think): for example, I have self-assigned ids for most of my entities. Because of that, Hibernate checks each time whether the row exists before it saves it. I don't need this because the batch I'm executing is executed only once, when I create the database from scratch. The best would be to tell Hibernate to ignore the primary key checks (like mysqldump does) and re-enable the key checking once the database has been created. It's just a one-shot batch to initialize my database.
The second problem is again about the foreign keys. Hibernate inserts rows with null values and then runs updates in order to make the foreign keys work.
About using another technology: I would like to make this batch work with Hibernate because afterwards my whole website works very well with Hibernate, and if it's Hibernate that creates the database, I'm sure the naming rules and all the foreign keys will be created correctly.
Finally, it's a read-only database. (I have a user database, which uses InnoDB, where I do updates and inserts while my website is running, but the document database is read-only and MyISAM.)
Here is an example of what I'm doing:
TreeNode rootNode = new TreeNode();
recursiveLoadSubNodes(rootNode); // This method creates my big tree, in memory only.
hibernateSession.beginTransaction();
hibernateSession.save(rootNode); // takes more than an hour to save 1 GB of data: hundreds of sub tree nodes, thousands of documents, tens of thousands of paragraphs.
hibernateSession.getTransaction().commit();
It's a little hard to guess what could be the problem here but I could think of 3 things:
Increasing batch_size alone might not help because - depending on your model - inserts might be interleaved (i.e. A B A B ...). You can allow Hibernate to reorder inserts and updates so that they can be batched (i.e. A A ... B B ...). Depending on your model this might not work because the inserts might not be batchable. The necessary properties would be hibernate.order_inserts and hibernate.order_updates, and a blog post that describes the situation can be found here: https://vladmihalcea.com/how-to-batch-insert-and-update-statements-with-hibernate/
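To make that concrete, here's a sketch of enabling those settings programmatically. The property names are standard Hibernate settings; the batch size value is just an example:
// Sketch: enable JDBC batching and let Hibernate reorder inserts/updates so they can be batched.
Properties props = new Properties();
props.setProperty("hibernate.jdbc.batch_size", "50"); // example value
props.setProperty("hibernate.order_inserts", "true");
props.setProperty("hibernate.order_updates", "true");
SessionFactory sessionFactory = new Configuration()
    .configure()          // reads hibernate.cfg.xml as usual
    .addProperties(props)
    .buildSessionFactory();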
If the entities don't already exist (which seems to be the case) then the problem might be the first level cache. This cache will cause Hibernate to get slower and slower because each time it wants to flush changes it will check all entries in the cache by iterating over them and calling equals() (or something similar). As you can see, that will take longer with each new entity that's created. To fix that you could either try to disable the first level cache (I'd have to look up whether that's possible for write operations and how this is done - or you do that :) ) or try to keep the cache small, e.g. by inserting the books yourself and evicting each book from the first level cache after the insert (you could also go deeper and do that on the document or paragraph level).
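A common way to keep the first-level cache small is to save in chunks and flush/clear the session periodically. A sketch, assuming the tree can be walked and documents saved one by one instead of a single save(rootNode):
// Sketch: chunked inserts with periodic flush/clear to keep the first-level cache small.
// "allDocuments" is a hypothetical flat collection built from the in-memory tree.
int batchSize = 50; // ideally the same value as hibernate.jdbc.batch_size
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int i = 0;
for (Document doc : allDocuments) {
    session.save(doc);
    if (++i % batchSize == 0) {
        session.flush();  // push the pending inserts to the database
        session.clear();  // evict everything from the first-level cache
    }
}
tx.commit();
session.close();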
It might not actually be Hibernate (or at least not alone) but your DB as well. Note that restoring dumps often removes/disables constraint checks and indices along with other optimizations so comparing that with Hibernate isn't that useful. What you'd need to do is create a bunch of insert statements and then just execute those - ideally via a JDBC batch - on an empty database but with all constraints and indices enabled. That would provide a more accurate benchmark.
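A minimal JDBC batch sketch that could serve as that benchmark baseline (table and column names are made up for illustration):
// Sketch: plain JDBC batch insert to compare against Hibernate.
try (Connection con = DriverManager.getConnection(url, user, password);
     PreparedStatement ps = con.prepareStatement(
         "insert into paragraph (id, document_id, content) values (?, ?, ?)")) {
    con.setAutoCommit(false);
    int count = 0;
    for (Paragraph p : paragraphs) {
        ps.setLong(1, p.getId());
        ps.setLong(2, p.getDocumentId());
        ps.setString(3, p.getContent());
        ps.addBatch();
        if (++count % 1000 == 0) {
            ps.executeBatch(); // one round trip per 1000 rows
        }
    }
    ps.executeBatch(); // flush the remainder
    con.commit();
}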
Assuming that comparison shows that the plain SQL insert isn't that much faster then you could decide to either keep what you have so far or refactor your batch insert to temporarily disable (or remove and re-create) constraints and indices.
Alternatively you could try not to use Hibernate at all or change your model - if that's possible given your requirements which I don't know. That means you could try to generate and execute the SQL queries yourself, use a NoSQL database or NoSQL storage in a SQL database that supports it - like Postgres.
We're doing something similar, i.e. we have Hibernate entities that contain some complex data which is stored in a JSONB column. Hibernate can read and write that column via a custom usertype but it can't filter (Postgres would support that but we didn't manage to enable the necessary syntax in Hibernate).

Flexible search in database

I have a legacy system that allows users to manage some entities called "TRANSACTION" in the (MySQL) DB, mapped to a Transaction class in Java. Transaction objects have about 30 fields; some of them are columns in the DB, some of them are joins to other tables, like CUSTOMER, PRODUCT, COMPANY and stuff like that.
Users have access to a "Search" screen, where they are allowed to search using a TransactionId and a couple of extra fields, but they want more flexibility. Basically, they want to be able to search using any field in TRANSACTION or any linked table.
I don't know how to make the search both flexible and quick. Is there any way? I don't think that having an index for every combination of columns is a valid solution, but full table scans are also not valid... is there any reasonable design? I'm using Criteria to build the queries, but this is not the problem.
Also, I think MySQL is not using the right indexes, since when I make Hibernate log the SQL command, I can almost always improve the response time by forcing an index... I'm starting to use something like this trick adapted to Criteria to force a specific index, but I'm not proud of the "if" chain. I'm getting something like
if (queryDto.getFirstName() != null) {
    // force index "IDX_TX_BY_FIRSTNAME"
} else if (queryDto.getProduct() != null) {
    // force index "IDX_TX_BY_PRODUCT"
}
and it feels horrible
Sorry if the question is "too open", I think this is a typical problem, but I can't find a good approach
Hibernate is very good for writing data while SQL still excels at reading it. jOOQ might be a better alternative in your case, and since you're using MySQL it's free of charge anyway.
JOOQ is like Criteria on steroids, and you can build more complex queries using the exact syntax you'd use for native querying. You have type-safety and all features your current DB has to offer.
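For example, the "if" chain from the question could become a dynamically built, type-safe condition. This is only a sketch: TRANSACTION, CUSTOMER and their fields stand in for jOOQ-generated classes and are purely illustrative.
// Sketch: build the WHERE clause dynamically instead of forcing indexes by hand.
Condition condition = DSL.trueCondition();
if (queryDto.getFirstName() != null) {
    condition = condition.and(CUSTOMER.FIRST_NAME.eq(queryDto.getFirstName()));
}
if (queryDto.getProduct() != null) {
    condition = condition.and(TRANSACTION.PRODUCT_CODE.eq(queryDto.getProduct()));
}
Result<?> result = create.select()
    .from(TRANSACTION)
    .leftOuterJoin(CUSTOMER).on(TRANSACTION.CUSTOMER_ID.eq(CUSTOMER.ID))
    .where(condition)
    .fetch();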
As for indexes, you can't simply index every field combination. It's better to index the most used ones and try compound indexes that cover as many use cases as possible. Sometimes the query executor will not use an index because it's faster otherwise, so it's not always a good idea to force the index. What works in your test environment might not hold for the production system.

Relational databases - to delete or not to delete? [duplicate]

This question already has answers here:
Never delete entries? Good idea? Usual?
(10 answers)
Closed 9 years ago.
I've just heard from a colleague that deleting rows in a relational DB is pretty dangerous (regarding indexing and cascading actions).
He said that one solution that still allows deletions is to have a "deprecated" field for each entity and to set that field to true in order to mark the row as "deleted".
Of course that requires all your queries to filter on "deprecated" == false (which is pretty cumbersome).
My questions are:
Is he right? If so, what exactly is dangerous about deleting?
Is his solution good practice?
Are there any alternatives to this solution?
thanks.
This question has multiple layers. In general it is a good idea to mark rows as deleted instead of actually deleting them.
There are a few major benefits:
The data is recoverable. You can provide an undelete to users.
The update is faster than the delete.
In a publicly facing app, none of the publicly reachable code performs a true delete, making it much more difficult to use that code for inappropriate purposes (SQL injection, etc.).
If you ever want to report on your data, you can.
There are of course caveats and best practices:
This does not apply to lookup tables with easy to recreate data.
You need to consider culling. In our databases we cull deleted records into archival reporting tables. This keeps the primary tables fast, but allows us to report on data related to "deleted" items.
Your culling performance impact (at largish scale) will be similar to a backup and have similar considerations. Run them off hours if you want to archive them all at once, or periodically via cron if you want to just take X number per hour.
NEVER use the deleted data in your live data. In other words it is not a status flag! It is gone. I've made this mistake before and undoing it was painful.
If there is a very high percentage of deletes in a table ask yourself if keeping the data is actually important. You might adjust your culling process to not archive and to instead just run the actual delete.
This approach will last for a really really long time unless your dataset is massive and deletions are massive. Some architecture astronaut will ask you about what is going to happen when you archive 1 billion rows.... when you get to that point you are either hugely successful and can find another way, or you've screwed something else up so completely your archive tasks won't matter any more relative to the other issues you have.
If you have your schema well structured and use transactions where needed, deletions are perfectly safe, and with deletion you will get far better performance than with the approach your friend suggests.
Inserting a new element may get as tricky as deleting one. I wonder what hacky approach your friend would suggest to overcome that.
CRUD operations have been here for a long while now, and the creators of relational databases have done a pretty good job of optimizing them. Any attempt to outsmart decades of gradual improvement with such a hack will most probably fail.
Applying the solution your friend suggests may result in having a huge database with only a small fraction of non-deleted elements. This way your queries will become slower too.
Now, having said all that, I would like to support the other side a little bit. There are cases when the solution your friend suggests may be the only option. You can't change your schema every time some query turns out to be slow. Also, as others suggest in their answers, if you use the "mark as deleted" approach, deleted data will be recoverable (which may or may not be good, as mentioned in other answers).
Dangerous? Will the server or data center blow up?
I think your colleague is indulging in some hyperbole.
You need not cascade updates or deletes if you don't wish to, but it can be easier than having to clean up manually. It's a choice that you make when you create your schema.
Marking rows as deleted using a flag is another way to go, but it's just another choice. You'll have to work harder to find all the bad rows and run a batch job to remove them.
If you have retention requirements, it's more typical to partition the schema and move older records off into a warehouse for historical analysis and reporting. In that case you wouldn't be deleting anything, just moving them out after a set period of time.
Yes, he is right. Databases (indexes, specifically) are optimized for insertion, and deletion can be painfully slow. Even setting an indexed field to null can cause the same trouble. I see cascading as a lesser issue because the DB should never be configured to do dangerous cascading automatically.
Yes, flagging a record as "inactive", "deleted", "deprecated" (your choice) is standard and preferred practice to resolve a deletion-related performance issue.
But, to qualify the above, it only applies to transactional (as opposed to archival) tables, and then only to those specific tables which contain a huge number of rows (millions and more). Do not ham-handedly apply a "best practice" across the board.
Another approach is to simply not have a transactional table with millions of rows. Move the data to an archival table before it grows to such proportions.
The problem with DELETEs in relational databases is that they are irreversible. You delete data and it's gone. There is no way to restore it (except rolling back to an earlier backup, of course). Combined with the SQL syntax, which is based on the principle "take everything I don't explicitly exclude", this can easily lead to unintentional loss of data due to user error or bugs.
Just marking data as deleted but not actually deleting it has the advantage that deleted data can be easily restored. But keep in mind that the marked-as-deleted pattern also has disadvantages:
As you said, programming gets a bit more complicated, because you have to remember that every SELECT must now include a WHERE deleted = false.
When you frequently delete data, your database will accumulate a lot of cruft. This will cause it to grow which impacts performance and uses unnecessary drive space.
When your users are required to delete data due to privacy regulations and assume that pressing the "delete" button really deletes it, this practice might inadvertently cause them to violate those regulations.
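As an illustration only (this page is Hibernate-heavy, though the question itself isn't tied to Hibernate): the mark-as-deleted pattern can be centralized with Hibernate's @Where annotation so the deleted = false filter doesn't have to be repeated in every query. Entity and column names here are made up.
// Sketch: soft delete. Hibernate appends "deleted = false" to queries on this entity.
@Entity
@org.hibernate.annotations.Where(clause = "deleted = false")
public class Order {

    @Id
    private Long id;

    private boolean deleted = false;

    public void markDeleted() {
        this.deleted = true; // the "delete" becomes an UPDATE, never a DELETE
    }
}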

Avoiding N+1 selects and invalid results from EclipseLink with batch read

I'm trying to cut down the number of N+1 selects incurred by my application. The application uses EclipseLink as an ORM, and in as many places as possible I've tried to add the batch read hint to queries. In a large number of places in the app I don't always know exactly what relationships I'll be traversing (my view displays fields based on user preferences). At that point I'd like to run one query to populate all of those relationships for my objects.
My dream is to call something like ReadAllRelationshipsQuery(Collection,RelationshipName) and populate all of these items so that later calls to:
Collection.get(0).getMyStuff will already be populated and not cause a db query. How can I accomplish this? I'm willing to write any code I need to, but I can't find a way that works with the EclipseLink framework.
Why don't I just batch read all of the possible fields and let them load lazily? What I've found is that the batch value holders that implement batch reads don't behave well with the EclipseLink cache. If a batch read value holder isn't "evaluated" and ends up in the EclipseLink cache, it can become stale and return incorrect data (this behavior was logged as an EclipseLink bug but rejected...)
edit: I found the link to the bug here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=326197
How do I avoid N+1 selects for objects I already have a reference to?
You have three basic ways to load data into objects from a JPA-based solution. These are:
Load dynamically by object traversal (e.g. myObject.getMyCollection().get()).
Load graphs of objects by prefetching dynamically using JPA QL (e.g. FETCH JOINs as described at the Oracle JPA tutorial )
Load by setting the fetch mode ( Is there a way to change the JPA fetch type on a method? )
Each of these has pros and cons.
Loading dynamically by object traversal will generate more (highly targeted) queries. These queries are usually small (not large SQL statements, but they may load lots of data) and tend to play nicely with a second level cache, but you can get lots and lots of little queries.
Prefetching with JPA QL will give you exactly what you want, but that assumes that you know what you want.
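For example, a fetch join along these lines (a sketch; entity and relationship names are made up):
// Sketch: prefetch a relationship in the same query instead of N+1 lazy loads.
List<Order> orders = em.createQuery(
        "select distinct o from Order o join fetch o.items where o.customer.id = :customerId",
        Order.class)
    .setParameter("customerId", customerId)
    .getResultList();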
Setting the fetch mode to EAGER will load lots and lots of data for you automatically, but depending on the configuration and usage this may not actually help much (or may make things a lot worse) as you may wind up dragging a LOT of data from the DB into your app that you didn't expect.
Regardless, I highly recommend using p6spy ( http://sourceforge.net/projects/p6spy/ ) in conjunction with any JPA-based application to understand the effects of your tuning.
Unfortunately, JPA makes some things easy and some things hard - mainly, side-effects of your usage. For example, you might fix one problem by setting the fetch mode to eager, and then create another problem where the eager fetch pulls in too much data. EclipseLink does provide tooling to help sort this out ( EclipseLink Performance Tools )
In theory, if you wanted to you could write a generic JavaBean property walker by using something like Apache BeanUtils. Usually just calling a method like size() on a collection is enough to force it to load (although using a collection batch fetch size might complicate things a bit).
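A sketch of what such a walker might look like with commons-beanutils (the relationship names would come from the user preferences; purely illustrative, and the reflection calls throw checked exceptions you'd need to handle):
// Sketch: force selected lazy relationships to load by touching them.
for (String property : selectedRelationships) {
    Object value = PropertyUtils.getProperty(entity, property); // org.apache.commons.beanutils
    if (value instanceof Collection) {
        ((Collection<?>) value).size(); // accessing the collection triggers the lazy load
    }
}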
One thing to pay particular attention to is the scope of your session and your use of caches (EclipseLink cache).
Something not clear from your post is the scope of a session. Is a session a one shot affair (e.g. like a web page request) or is it a long running thing (e.g. like a classic client/server GUI app)?
It is very difficult to optimize the retrieval of relationships if you do not know what relationships you require.
If your application is requesting what relationships it wants, then you must know at some level which relationships you require, and should be able to optimize these in your query for the objects.
For an overview of relationship optimization techniques see,
http://java-persistence-performance.blogspot.com/2010/08/batch-fetching-optimizing-object-graph.html
For Batch Fetching, there are three types: JOIN, EXISTS, and IN. The problem you outlined of changes to data affecting the original query for cache-batched relationships only applies to JOIN and EXISTS, and only when you have selection criteria based on updatable fields (if the query you are optimizing is on id, or on all instances, you are OK). IN batch fetching does not have this issue, so you can use it for all the relationships.
ReadAllRelationshipsQuery(Collection,RelationshipName)
How about,
Query query = em.createQuery("Select o from MyObject o where o.id in :ids");
query.setParameter("ids", ids);
query.setHint("eclipselink.batch", relationship); // the relationship path, e.g. "o.myStuff"
If you know all possible relations and the user preferences, why don't you just dynamically build the JPQL string (or Criteria) before executing it?
Like:
String sql = "SELECT u FROM User u"; //use a StringBuilder, this is just for simplity's sake
if(loadAdress)
{
sql += " LEFT OUTER JOIN u.address as a"; //fetch join and left outer join have the same result in many cases, except that with left outer join you could load associations of address as well
}
...
Edit: Since the result would be a cross product, you should then iterate over the entities and remove duplicates.
In the query, use FETCH JOIN to prefetch relationships.
Keep in mind that the resulting rows will be the cross product of all rows selected, which can easily be more work than the N+1 queries.

Verifying a database is as you expect it to be

I've been writing a java app on my machine and it works perfectly using the DB I set up, but when I install it on site it blows up because the DB is slightly different.
So I'm in the process of writing some code to verify that:
A: I've got the DB details correct
B: The database has all the Tables I expect and they have the right columns.
I've got A down but I've got no idea where to start with B, any suggestions?
The target DB for the current client is Oracle, but the app can be configured to run on SQL Server as well. So a generic solution would be appreciated, but it is not necessary, as I'm sure I can figure out how to do one from the other.
You'll want to query the information schema of the database; here are some examples for Oracle. Every platform I am aware of has something similar.
http://www.alberton.info/oracle_meta_info.html
You might be able to use a database migration tool like LiquiBase for this -- most of these tools have some way of checking the database. I don't have first hand experience using it so it's a guess.
I use DbUnit to test databases. It is a Java-based solution that integrates well with JUnit. It is possible to use it with almost no Java. I haven't used it in exactly the same situation as you described, but it should be close enough to work.
The most generic solution would be to execute queries whose select clause contains the expected columns and whose from clause contains the table name, inside a try/catch block. You can use a where clause of 1=2 so as not to fetch any data. If the query executes without throwing an exception, then you have the expected table and columns.
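For example (a sketch; the table and column names are placeholders, and "connection" is assumed to be an open java.sql.Connection):
// Sketch: verify that a table and its expected columns exist without fetching any rows.
try (Statement stmt = connection.createStatement()) {
    stmt.executeQuery("select customer_id, customer_name from customer where 1 = 2");
    // no exception: table and columns exist
} catch (SQLException e) {
    // missing table or column; e.getMessage() says which
}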
The "slightly different" piece might be better handled by scripting the creation of the database in the first place. An automated process gives you a better chance of making the two identical.
Another point worth making is that you minimize your risk by making your dev and prod environments identical - same database schema and vendor for both. Change the circumstances that make the two different.
Lastly, you don't say what is "slightly" different, but sometimes these are unavoidable (e.g. Oracle uses sequences, SQL Server uses identities). Maybe Hibernate can help you to switch between vendors more reliably. It abstracts details in such a way that changing databases can mean modifying a single value in a configuration file.
What you need to have is basically Unit Tests for your database. "A column must exist named FOOBAR, the type must be Integer. No foreign keys may exist etc."
This is doable with plain JUnit and JDBC (ask the table for its metadata), which is useful when you want to be absolutely certain about what is being done; that can be harder when using e.g. DbUnit.
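A sketch of such a check with JUnit 4 and JDBC metadata (table and column names are placeholders; "connection" is assumed to be set up in the test fixture):
// Sketch: assert that the CUSTOMER table has the columns the application expects.
@Test
public void customerTableHasExpectedColumns() throws Exception {
    DatabaseMetaData meta = connection.getMetaData();
    Set<String> columns = new HashSet<>();
    try (ResultSet rs = meta.getColumns(null, null, "CUSTOMER", null)) {
        while (rs.next()) {
            columns.add(rs.getString("COLUMN_NAME").toUpperCase());
        }
    }
    assertTrue("missing CUSTOMER_ID", columns.contains("CUSTOMER_ID"));
    assertTrue("missing CUSTOMER_NAME", columns.contains("CUSTOMER_NAME"));
}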
You can check for the presence of tables, columns, views, etc. using these tables in Oracle
USER_TABLES
USER_VIEWS
USER_PROCEDURES
(or for everything)
USER_OBJECTS WHERE OBJECT_TYPE = '??'
To keep going... USER_TAB_COLS for table columns
I use MigrateDB for this. It lets you build queries that do things like check for the existence of given tables, columns, rows, indexes, etc. for a given database and use those as "tests." If a test fails, it triggers an "action" (which is just another query that knows how to remedy the problem.)
MigrateDB supports multiple database platforms (you can specify the "check for table existence query" for each platform, for example), completely configurable tests (you can make your own up), comes with fairly complete Oracle tests, and can be run in "audit only" mode so that it only tells you what the differences are.
It's a nice, robust solution.
If you're using plain JDBC, you should try using DatabaseMetaData.getTables and the other similar methods available in that metadata class.
