Sorry in advance if someone has already answered this specific question, but I have yet to find an answer to my problem, so here goes.
I am working on an application (no, I cannot share the code, as it is for a job, so I'm sorry about that one) which uses DAOs, Hibernate, POJOs, and all that stuff for communicating with and writing to the database. This works well as long as there isn't a ton of data to check when I call Session.flush(). That said, there is a page where a user can add any number of items to a product, and in one particular case there are something like 25 items. Each item has about 8 fields apiece, all stored in the database. When I call the flush it does save everything to the database, but it takes FOREVER to complete. The three lines I am calling are:
merge(myObject);
session.flush();
session.refresh(myObject);
I have tried a number of different combinations of things and a number of different solutions to fix this problem, so coming back and saying "Don't use flush()" isn't much help, as saveOrUpdate() and the other Hibernate session methods don't seem to work either. The only solutions I can think of are to scrap the entire project (the code we got was inherited and is poorly written, to say the least) or to tell the user community to suck it up.
It is my understanding from the Hibernate API that when you want to write data to the database, it runs a dirty check on every item; if there is a difference, it queues up update queries and then runs them. It seems as though this data is being updated every time, because the "DATE_CREATED" column in my database changes even when the other values are unchanged.
What I was wondering is whether there is another way to prevent such a large commit of data, or a way of excluding that particular column from the "check" Hibernate does, so I don't have to commit all 25 items if I only made a change to one?
Thanks in advance.
Mike
Well, you really cannot avoid the dirty checking in Hibernate unless you use a StatelessSession. Of course, you lose a lot of features (lazy loading, etc.) with that, but it's up to you to make that decision.
Another option: I would definitely try to use dynamic-update=true in your entity. Like:
@Entity
@DynamicUpdate // org.hibernate.annotations.DynamicUpdate; older Hibernate versions use @org.hibernate.annotations.Entity(dynamicUpdate = true)
public class MyClass { ... }
Using that, Hibernate will update only the modified columns. In small tables with few columns it's not that effective, but in your case it may help make the whole process faster, since you cannot avoid dirty checking with a regular Hibernate Session. Updating a few columns instead of the whole object is always better, right?
This post talks more about the dynamic-update attribute.
What I was wondering is whether there is another way to prevent such a large commit of data, or a way of excluding that particular column from the "check" Hibernate does, so I don't have to commit all 25 items if I only made a change to one?
I would profile the application to ensure that the dirty checking on flush is actually the problem. If you find that this is indeed the case, you can use evict() to manage the session size.
session.update(myObject);
session.flush();          // push the pending SQL to the database
session.evict(myObject);  // detach the instance so it is no longer dirty-checked on later flushes
I want a page of filtered data from an Oracle database table, but I have a query that might return tens of millions of records, so it's not feasible to pull them all into memory. I need to filter records out in a way that cannot be done via SQL and return a page of records. In other words, the pagination must be done after the filtering.
So, I attempted to use Hibernate's ScrollableResults, thinking it would be a way to pull in only chunks at a time and iterate through them. So, I created it:
ScrollableResults results = query.setReadOnly(true)
.setFetchSize(500)
.setCacheable(false)
.scroll();
... and yet, it appears to pull everything into memory (2.5GB pulled in per query). I've seen another question and I've tried some of the suggestions, but most seem MySQL specific, and I'm using an Oracle 19 driver (e.g. Integer.MIN_VALUE is rejected outright as a fetch size in the Oracle driver).
There was a suggestion to use a stateless session (I'm using the EntityManager which has no stateless option), but my thought is that if we don't fetch many records (because we only want the first page of 200 filtered records), why would Hibernate have millions of records in memory anyway, even though we never scrolled over them?
It's clear to me that I don't understand how/why Hibernate pulls things into memory, or how to get it to stop doing so. Any suggestions on how to prevent it from doing so, given the constraints above?
Some things I'm going to try:
Different scroll modes. Maybe an insensitive or forward-only mode keeps Hibernate from needing to pull everything in?
Clearing the session after we have our page. I'm closing the session (both using close() in the ScrollableResults and the EntityManager), but maybe an explicit clear() will help?
We were scrolling through the entire ScrollableResults to get the total count. This caused two things:
The Hibernate session cached entities.
The ResultSet in the driver kept rows that it had scrolled past.
Fixing this is specific to my case, really, but I did two things:
As we scroll, periodically clear the Hibernate session. Since we use the EntityManager, I had to do entityManager.unwrap(Session.class).clear(). Not sure if entityManager.clear() would do the job or not.
Make the ScrollableResults forward-only so the Oracle driver doesn't have to keep records in memory as it scrolls. This was as simple as doing .scroll(ScrollMode.FORWARD_ONLY). Only possible since we're only moving forward, though.
This allowed us to maintain a smaller memory footprint, even while scrolling through literally every single record (tens of millions).
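For reference, here is a rough sketch of that combination, assuming Hibernate 5's Session/ScrollableResults API (the filter step and the 500-row clear interval are just placeholders):
// imports: org.hibernate.Session, org.hibernate.ScrollMode, org.hibernate.ScrollableResults
Session session = entityManager.unwrap(Session.class);
ScrollableResults results = query.setReadOnly(true)
    .setFetchSize(500)
    .setCacheable(false)
    .scroll(ScrollMode.FORWARD_ONLY);     // forward-only: the driver can discard rows already scrolled past
try {
    int scanned = 0;
    while (results.next()) {
        Object row = results.get(0);      // Hibernate 5 style; Hibernate 6 uses results.get()
        // ... apply the in-memory filter and collect the current page here ...
        if (++scanned % 500 == 0) {
            session.clear();              // periodically drop the first-level cache
        }
    }
} finally {
    results.close();
}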
Why would you scroll through all results just to get the count? Why not just execute a count query?
For my website, I'm creating a book database. I have a catalog with a root node; each node has subnodes, each subnode has documents, each document has versions, and each version is made of several paragraphs.
In order to create this database as fast as possible, I first create the entire tree model in memory, and then I call session.save(rootNode).
This single save will populate my entire database (when I do a mysqldump on the database at the end, it weighs 1 GB).
The save costs a lot (more than an hour), and since the database grows with new books and new versions of existing books, it costs more and more. I would like to optimize this save.
I've tried increasing the batch_size, but it changes nothing since it's a single save. When I mysqldump the data and insert it back into MySQL, the operation takes 2 minutes or less.
And when I run "htop" on the Ubuntu machine, I can see that MySQL is only using 2 or 3% CPU, which means that it's Hibernate that's slow.
If someone could give me possible techniques to try, or possible leads, it would be great... I already know some of the reasons why it takes time; if someone wants to discuss them with me, thanks in advance for the help.
Here are some of my problems (I think): for example, I have self-assigned ids for most of my entities. Because of that, Hibernate checks each time whether the row exists before it saves it. I don't need this, because the batch I'm executing runs only once, when I create the database from scratch. The best would be to tell Hibernate to ignore the primary key rules (like mysqldump does) and re-enable the key checking once the database has been created. It's just a one-shot batch to initialize my database.
The second problem again concerns the foreign keys: Hibernate inserts rows with null values and then runs an update to make the foreign keys work.
About using another technology: I would like to make this batch work with Hibernate, because afterwards my whole website works very well with Hibernate, and if Hibernate creates the database, I'm sure the naming rules and all the foreign keys will be created correctly.
Finally, it's a read-only database. (I have a user database using InnoDB, where I do updates and inserts while my website is running, but the document database is read-only and uses MyISAM.)
Here is an example of what I'm doing:
TreeNode rootNode = new TreeNode();
recursiveLoadSubNodes(rootNode); // This method creates my big tree, in memory only.
hibernateSession.beginTransaction();
hibernateSession.save(rootNode); // takes more than an hour and saves 1 GB of data: hundreds of sub tree nodes, thousands of documents, tens of thousands of paragraphs.
hibernateSession.getTransaction().commit();
It's a little hard to guess what the problem could be here, but I can think of 3 things:
Increasing batch_size alone might not help because - depending on your model - inserts might be interleaved (i.e. A B A B ...). You can allow Hibernate to reorder inserts and updates so that they can be batched (i.e. A A ... B B ...). Depending on your model this might not work, because the inserts might not be batchable. The necessary properties are hibernate.order_inserts and hibernate.order_updates, and a blog post that describes the situation can be found here: https://vladmihalcea.com/how-to-batch-insert-and-update-statements-with-hibernate/
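For what it's worth, a minimal sketch of enabling this (the batch size of 50 is arbitrary, and the same keys can also go into hibernate.cfg.xml or persistence.xml instead of programmatic configuration):
// imports: org.hibernate.cfg.Configuration
Configuration cfg = new Configuration();
cfg.setProperty("hibernate.jdbc.batch_size", "50");  // send inserts to the driver in JDBC batches
cfg.setProperty("hibernate.order_inserts", "true");  // group queued inserts by entity type so they can be batched
cfg.setProperty("hibernate.order_updates", "true");  // same for updates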
If the entities don't already exist (which seems to be the case), then the problem might be the first-level cache. This cache causes Hibernate to get slower and slower, because each time it wants to flush changes it checks all entries in the cache by iterating over them and calling equals() (or something similar). As you can see, that takes longer with each new entity that's created. To fix that, you could either try to disable the first-level cache (I'd have to look up whether that's possible for write operations and how it's done - or you do that :) ) or try to keep the cache small, e.g. by inserting the books yourself and evicting each book from the first-level cache after the insert (you could also go deeper and do that on the document or paragraph level).
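One common way to keep the cache small, sketched below under the assumption that the tree can be walked and saved book by book (rootNode.getBooks() and the chunk size of 50 are made up), is a periodic flush()/clear(), which has roughly the same effect as evicting everything that has already been written:
int i = 0;
for (Book book : rootNode.getBooks()) {
    hibernateSession.save(book);          // cascades down to documents/versions/paragraphs
    if (++i % 50 == 0) {
        hibernateSession.flush();         // push the pending inserts to the database
        hibernateSession.clear();         // empty the first-level cache so dirty checking stays cheap
    }
}
hibernateSession.flush();
hibernateSession.clear();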
It might not actually be Hibernate (or at least not alone) but your DB as well. Note that restoring dumps often removes/disables constraint checks and indices along with other optimizations so comparing that with Hibernate isn't that useful. What you'd need to do is create a bunch of insert statements and then just execute those - ideally via a JDBC batch - on an empty database but with all constraints and indices enabled. That would provide a more accurate benchmark.
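A minimal JDBC-batch benchmark could look like the sketch below (table, column and accessor names are made up; the point is only to measure raw insert speed with constraints and indices in place):
// imports: java.sql.Connection, java.sql.DriverManager, java.sql.PreparedStatement
try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPassword);
     PreparedStatement ps = con.prepareStatement(
         "INSERT INTO paragraph (id, document_id, content) VALUES (?, ?, ?)")) {
    con.setAutoCommit(false);
    int count = 0;
    for (Paragraph p : paragraphs) {
        ps.setLong(1, p.getId());
        ps.setLong(2, p.getDocumentId());
        ps.setString(3, p.getContent());
        ps.addBatch();
        if (++count % 1000 == 0) {
            ps.executeBatch();            // send 1000 inserts in one round trip
        }
    }
    ps.executeBatch();
    con.commit();
}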
Assuming that comparison shows that the plain SQL insert isn't that much faster, you could decide either to keep what you have so far or to refactor your batch insert to temporarily disable (or remove and re-create) constraints and indices.
Alternatively you could try not to use Hibernate at all or change your model - if that's possible given your requirements which I don't know. That means you could try to generate and execute the SQL queries yourself, use a NoSQL database or NoSQL storage in a SQL database that supports it - like Postgres.
We're doing something similar, i.e. we have Hibernate entities that contain some complex data which is stored in a JSONB column. Hibernate can read and write that column via a custom usertype but it can't filter (Postgres would support that but we didn't manage to enable the necessary syntax in Hibernate).
I have a query like this
SELECT Folder.name
FROM FolderTable, ValidFolder, ValidFolderGroup, ValidUser, ValidLocation, ValidDepartment
WHERE ValidUser.LocationCode *= ValidLocation.LocationCode
  AND ValidUser.DepartmentCode *= ValidDepartment.DepartmentCode
  AND Folder.IssueUser = ValidUser.UserId
  AND ValidFolder.FolderType = Folder.FolderType
  AND ValidFolderGroup.FolderGroupCode = ValidFolder.FolderGroupCode
  AND ValidFolderGroup.GroupTypeCode = 13
  AND (ValidUser.UserId = 'User' OR ValidUser.ManagerId = 'User')
  AND ValidFolderGroup.GroupTypeCode = 13
  AND Folder.IssueUser = 'User'
Now, all the tables whose names start with Valid are cache tables, so those tables already contain data.
Supposing someone is using jOOQ or Hibernate, which would be the better option:
Use the query as written above, with all the joins?
Or use Java code to fulfill the requirement rather than the joins, since a user of Hibernate or jOOQ already has Java classes for the tables, and the Valid tables already hold all the data?
Okay, you're probably not going to like this answer, but the best way to do this is not to keep the Valid tables "cached".
The best solution in my opinion would be to use jOOQ (if you prefer DSL) or Hibernate (if you prefer OR mapping), query the database every time, and consistently use the DAO pattern.
The jOOQ and Hibernate guys are almost certainly better at SQL than you are. We've used jOOQ and Hibernate in really large enterprise projects, and they both perform exceptionally, particularly with a good connection pool like BoneCP. If, after you've got that setup running well, you still think you may have performance issues, you can always add a cache (like EhCache) afterwards.
Ultimately though, I'm making a lot of assumptions about your software, namely that:
there are more people than you working on it, and
it has to be maintained.
If neither of these assumptions is true, then you can safely disregard this answer.
General answer:
Modern databases are incredibly good at optimising your query and choosing the best possible execution plan for you. Given your outer join notation using *=, you're obviously using SQL Server, so that's a pretty good database.
Even if you already have much of the "Valid" data in your application memory, chances are that your database also already has the same data in a buffer cache and thus the database doesn't need to hit the disk again for the various joins in your query.
In fact, depending on the nature of your data, the database might even assess that some of your joins are unneeded (if you have the right meta data, like constraints).
Specific answer:
In your particular case, it looks as though you can indeed strip most of your query yourself and query only the Folder table using search criteria from your application's "Valid" cache. I'm saying that it looks like it, because I don't fully understand the business logic behind those joins and whether they're all modelling 1:1 relationships, or whether removing them will change the semantics of the query.
So, technically, it's possible that you can remove the joins, but if you want to stay on the safe side, just keep things as they are as you migrate to jOOQ or Hibernate.
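For illustration only, if you go the Hibernate route, the stripped-down query could be as simple as the sketch below (the entity and property names are guesses based on the query above, and whether the filter values can legitimately come from your in-memory "Valid" cache depends on your business logic):
// Assumes a Folder entity mapped to FolderTable, and that validFolderTypes
// was derived from the cached Valid* data.
List<String> names = session.createQuery(
        "select f.name from Folder f "
      + "where f.issueUser = :user and f.folderType in (:validTypes)", String.class)
    .setParameter("user", "User")
    .setParameterList("validTypes", validFolderTypes)
    .getResultList();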
Alternative 3:
Of course, instead of tampering with this query, you might even be able to remove this query and fetch the Folder.name property already in your previous queries when you load the "Valid" content into memory.
Ever heard of views? Look into them, you'll be amazed.
Apart from that, it's impossible to say what you should do; there's no "best", and you provide far too little information to even make an educated guess about your specific requirements.
But I'd not hard-code things like database IDs in a query that ends up inside any program; that's far too prone to cause problems in the (near) future.
I've just heard from a colleague that deleting rows in a relational DB is pretty dangerous (regarding indexing and cascading actions).
He said that one solution for allowing deletions is to have a "deprecated" field for each entity and, instead of deleting, set that field to true in order to mark the row as "deleted".
Of course, that requires all your queries to fetch only the rows where "deprecated" == false (which is pretty cumbersome).
My questions are:
Is he right? If so, what exactly is dangerous about deleting?
Is his solution a good practice?
Are any alternatives to this solution available?
thanks.
This question has multiple layers. In general it is a good idea to mark rows as deleted instead of actually deleting them.
There are a few major benefits:
The data is recoverable. You can provide an undelete to users.
The update is faster than the delete.
In a publicly facing app, none of the publicly reachable code performs a true delete, making it much more difficult to abuse that code (SQL injection, etc.).
If you ever want to report on your data, you can.
There are of course caveats and best practices:
This does not apply to lookup tables with easy to recreate data.
You need to consider culling. In our databases we cull deleted records into archival reporting tables. This keeps the primary tables fast, but allows us to report on data related to "deleted" items.
Your culling performance impact (at largish scale) will be similar to a backup and have similar considerations. Run them off hours if you want to archive them all at once, or periodically via cron if you want to just take X number per hour.
NEVER use the deleted data in your live data. In other words it is not a status flag! It is gone. I've made this mistake before and undoing it was painful.
If there is a very high percentage of deletes in a table ask yourself if keeping the data is actually important. You might adjust your culling process to not archive and to instead just run the actual delete.
This approach will last for a really really long time unless your dataset is massive and deletions are massive. Some architecture astronaut will ask you about what is going to happen when you archive 1 billion rows.... when you get to that point you are either hugely successful and can find another way, or you've screwed something else up so completely your archive tasks won't matter any more relative to the other issues you have.
If you have your schema well structured and use transactions where needed, deletions are perfectly safe, and with deletion you will get far better performance than with the approach your friend suggests.
Inserting a new element can get as tricky as deleting one. I wonder what hacky approach your friend would suggest to overcome that.
CRUD operations have been around for a long while now, and the creators of relational databases have done a pretty good job of optimizing them. Any attempt to outsmart decades of gradual improvement with such a hack will most probably fail.
Applying the solution your friend suggests may result in a huge database with only a small fraction of non-deleted elements. That way your queries become slower too.
Now, having said all that, I would like to support the other side a little bit. There are cases when the solution your friend suggests may be the only option. You can't change your schema every time some query turns out to be slow. Also, as others suggest in their answers, if you use the "mark as deleted" approach, deleted data will be recoverable (which may or may not be good, as mentioned in other answers).
Dangerous? Will the server or data center blow up?
I think your colleague is indulging in some hyperbole.
You need not cascade updates or deletes if you don't wish to, but it can be easier than having to clean up manually. It's a choice that you make when you create your schema.
Marking rows as deleted using a flag is another way to go, but it's just another choice. You'll have to work harder to find all the bad rows and run a batch job to remove them.
If you have retention requirements, it's more typical to partition the schema and move older records off into a warehouse for historical analysis and reporting. In that case you wouldn't be deleting anything, just moving them out after a set period of time.
Yes, he is right. Databases (indexes, specifically) are optimized for insertion, and deletion can be painfully slow. Even setting an indexed field to null can cause the same trouble. I see cascading as a lesser issue, because the DB should never be configured to do dangerous cascading automatically.
Yes, flagging a record as "inactive", "deleted", "deprecated" (your choice) is standard and preferred practice to resolve a deletion-related performance issue.
But, to qualify the above, it only applies to transactional (as opposed to archival) tables, and then only to those specific tables which contain a huge number of rows (millions and more). Do not ham-handedly apply a "best practice" across the board.
Another approach is to simply not have a transactional table with millions of rows. Move the data to an archival table before it grows to such proportions.
The problem with DELETEs in relational databases is that they are irreversible. You delete data and it's gone. There is no way to restore it (except by rolling back to an earlier backup, of course). Combined with the SQL syntax, which is based on the principle "take everything I don't explicitly exclude", this can easily lead to unintentional loss of data due to user error or bugs.
Just marking data as deleted but not actually deleting it has the advantage that deleted data can be easily restored. But keep in mind that the marked-as-deleted pattern also has disadvantages:
As you said, programming gets a bit more complicated, because you have to remember that every SELECT must now include a WHERE deleted = false.
When you frequently delete data, your database will accumulate a lot of cruft. This will cause it to grow, which impacts performance and uses unnecessary drive space.
When your users are forced to delete data due to privacy regulations and they assume that pressing the "delete" button really deletes it, this practice might inadvertently cause them to violate those regulations.
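For what it's worth, Hibernate can centralize most of that bookkeeping. A minimal sketch, assuming a boolean deleted column and a table named article (@SQLDelete and @Where are org.hibernate.annotations):
@Entity
@SQLDelete(sql = "UPDATE article SET deleted = true WHERE id = ?") // session.delete() now only flags the row
@Where(clause = "deleted = false")                                 // entity loads and HQL queries skip flagged rows
public class Article {

    @Id
    private Long id;

    private boolean deleted = false;

    // ...
}
Note that @Where does not apply to native SQL queries, so those still need the explicit deleted = false filter.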
We develop and operate a blogging application in which user data is scattered across many tables:
- Blog
- Article
- Comment
- Message
- Trackback
- 50 other tables.
Users are able to close their account, and their account/contents must disappear from the site right away.
For legal/contractual reasons, we must also be able to undelete their account/content for a given duration, and also to make that data available to judicial authorities for another duration.
Over the years and different applications, we used different approaches:
"deleted" flag everywhere : Each table has a "deleted" column, which is updated when data is deleted/restored. Very nasty because it slows down every list generation queries, creates a lot of updates upon deletion/restore. Also, it does not handle the two stage deletion described above. In fact we never used this one, but it's worth dis-advising it :)
"Multi table": For each table, we create a second table with the same schema plus two extra fields (dateDeleted, reason). The extra fields are used to know if the data is still accessible for restoration, when to delete it, and why/how it was deleted in the first place. This version is just a bit better than the previous version, but can be very nasty performance wise too when tables are growing. Also, you have to change the schema of some tables (ie: remove UNIQUE constraints) which makes the system harder to understand/upgrade for new developers, administrators ... and mentally healthy people in general.
"Multi DB": Same approach as before, but we move data on a different database cluster, which allows to browse those data without impacting the "end users" db. Also, for this app, the uniqueness constraint is done at the java level, so all the schemas are the same. Lastly, the double data retention constraint is done by having a dedicated DB for each constraint, which makes things easiers.
I have to admit that none of those approaches satisfies me, even if they can work up to a certain amount of data. I have also imagined that we could just delete some key rows in the DB and leave the rest inconsistent (scheduled for a more controlled deletion job later), but it scares me ...
Do you know of other ways of doing the same thing while keeping the same level of features (we could align the two durations to simplify the problem)? I'm not looking for a solution for my existing apps, but would like to improve the next ones.
Any input will be highly appreciated !
It seems that every asset (blog, comment, ...) belongs to a user. I would give the user table a column "active" which is 0 or 1, and then, on each query for the different assets, check whether the owning user is active. Try to optimize this lookup with indexes or something like that. In my opinion it's the cleanest way. After this, you can implement a job which runs a cascading delete on users who have been disabled for longer than X days.
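A rough sketch of the two parts (the entity and property names, such as Article.author and User.disabledAt, are made up, and the purge relies on database-level ON DELETE CASCADE foreign keys to remove the dependent rows):
// imports: java.time.Instant, java.time.Duration, java.util.List
// 1. Every read joins through to the owning user and checks the flag.
List<Article> visible = session.createQuery(
        "select a from Article a where a.author.active = true", Article.class)
    .getResultList();

// 2. A scheduled job purges accounts that have been disabled for longer than X days (30 here).
int purged = session.createQuery(
        "delete from User u where u.active = false and u.disabledAt < :cutoff")
    .setParameter("cutoff", Instant.now().minus(Duration.ofDays(30)))
    .executeUpdate();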