Force Hibernate to ignore "invalid" columns - java

This is related to my other question here. I'm simply looking for a way (an annotation, system configuration, etc.) to make Hibernate skip over or ignore columns in models which do not align with the current ResultSet. To clarify: I'm not referring to Transient properties; these properties are all persisted, it's just that not every one is involved in every query.
Here is a quick summary of the situation:
I do not have control over the tables or the queries; everything is done with stored procedures (business decision, not mine).
Because the stored procedures can vary vastly from one another, my idea is to consolidate "branching models" (models that differ from each other by only 1-2 columns) into several larger models, which I can then use to insert or retrieve data, doing the mapping with DTOs.
The only hang-up at the moment is that Hibernate will throw a SQLException during hydration of a model entity if it cannot find a matching column in the current ResultSet (funnily enough, it doesn't seem to care about the other way around, i.e. the entity having fewer columns than the ResultSet, as long as at least one matches).
Any help would be much appreciated, as I've been trying to figure this out for a while now, and I'm about to go back to making each stored procedure have its own entity.

That's not really possible, and I would also not recommend it, as it could hide mapping errors. If you can't convince the database team to at least add synthetic columns with constant values so that you can reuse the same mapping, you are unfortunately out of luck and will have to continue with your multiple mappings.

Related

How to optimize one big insert with hibernate

For my website, I'm creating a book database. I have a catalog with a root node; each node has subnodes, each subnode has documents, each document has versions, and each version is made of several paragraphs.
In order to create this database as fast as possible, I first create the entire tree model in memory, and then I call session.save(rootNode).
This single save populates my entire database (at the end, when I do a mysqldump on the database, it weighs 1 GB).
The save costs a lot (more than an hour), and since the database grows with new books and new versions of existing books, it costs more and more. I would like to optimize this save.
I've tried increasing the batch_size, but it changes nothing, since it's a single save. When I mysqldump to a script and insert it back into MySQL, the operation takes 2 minutes or less.
And when I run "htop" on the Ubuntu machine, I can see that MySQL is only using 2 or 3% CPU, which means that it's Hibernate that's slow.
If someone could give me possible techniques to try, or possible leads, that would be great... I already know some of the reasons why it takes time. If someone wants to discuss it with me, thanks in advance for the help.
Here are some of my problems (I think): for example, I have self-assigned ids for most of my entities. Because of that, Hibernate checks each time whether the row already exists before it saves it. I don't need this, because the batch I'm executing runs only once, when I create the database from scratch. The best would be to tell Hibernate to skip the primary key checks (like mysqldump does) and re-enable the key checking once the database has been created. It's just a one-shot batch to initialize my database.
The second problem is again about the foreign keys. Hibernate inserts rows with null values, then runs an update to make the foreign keys work.
About using another technology: I would like to make this batch work with Hibernate, because afterwards my whole website works very well with Hibernate, and if Hibernate creates the database I'm sure the naming rules and all the foreign keys will be created properly.
Finally, it's a read-only database. (I have a user database, using InnoDB, where I do updates and inserts while my website is running, but the document database is read-only and MyISAM.)
Here is an example of what I'm doing:
TreeNode rootNode = new TreeNode();
recursiveLoadSubNodes(rootNode); // this method builds my big tree, in memory only
hibernateSession.beginTransaction();
hibernateSession.save(rootNode); // takes more than an hour to save 1 GB of data:
                                 // hundreds of sub tree nodes, thousands of documents,
                                 // tens of thousands of paragraphs
hibernateSession.getTransaction().commit();
It's a little hard to guess what the problem could be here, but I can think of 3 things:
Increasing batch_size alone might not help because, depending on your model, inserts might be interleaved (i.e. A B A B ...). You can allow Hibernate to reorder inserts and updates so that they can be batched (i.e. A A ... B B ...). Depending on your model this might not work, because the inserts might not be batchable. The necessary properties are hibernate.order_inserts and hibernate.order_updates, and a blog post that describes the situation can be found here: https://vladmihalcea.com/how-to-batch-insert-and-update-statements-with-hibernate/
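For reference, a minimal sketch of how those settings could be applied; the property values are illustrative, not tuned for this workload:

import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration()
    .setProperty("hibernate.jdbc.batch_size", "50")  // enable JDBC batching
    .setProperty("hibernate.order_inserts", "true")  // group inserts by table (A A ... B B ...)
    .setProperty("hibernate.order_updates", "true"); // group updates by table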
If the entities don't already exist (which seems to be the case), then the problem might be the first-level cache. This cache will cause Hibernate to get slower and slower, because each time it wants to flush changes it will check all entries in the cache by iterating over them and calling equals() (or something similar). As you can see, that will take longer with each new entity that's created. To fix that, you could either try to disable the first-level cache (I'd have to look up whether that's possible for write operations and how it's done - or you do that :) ) or try to keep the cache small, e.g. by inserting the books yourself and evicting each book from the first-level cache after the insert (you could also go deeper and do that at the document or paragraph level).
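A minimal sketch of the "keep the cache small" idea, assuming the tree can be saved in chunks (Book and books are placeholders for the sub-units of your tree):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int count = 0;
for (Book book : books) {
    session.save(book);
    if (++count % 50 == 0) { // same value as hibernate.jdbc.batch_size
        session.flush();     // push the pending inserts to the database
        session.clear();     // evict everything from the first-level cache
    }
}
tx.commit();
session.close();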
It might not actually be Hibernate (or at least not Hibernate alone) but your DB as well. Note that restoring dumps often removes/disables constraint checks and indices along with other optimizations, so comparing that with Hibernate isn't that useful. What you'd need to do is create a bunch of insert statements and then just execute those - ideally via a JDBC batch - on an empty database, but with all constraints and indices enabled. That would provide a more accurate benchmark.
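As a rough sketch of such a benchmark with a plain JDBC batch (the table and column names are made up for illustration):

try (Connection con = DriverManager.getConnection(url, user, password);
     PreparedStatement ps = con.prepareStatement(
             "INSERT INTO paragraph (id, document_id, content) VALUES (?, ?, ?)")) {
    con.setAutoCommit(false);
    for (Paragraph p : paragraphs) {
        ps.setLong(1, p.getId());
        ps.setLong(2, p.getDocumentId());
        ps.setString(3, p.getContent());
        ps.addBatch();       // queue the insert instead of executing it immediately
    }
    ps.executeBatch();       // send the queued inserts to the database as a batch
    con.commit();
}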
Assuming that comparison shows that the plain SQL insert isn't that much faster, you could decide either to keep what you have so far or to refactor your batch insert to temporarily disable (or remove and re-create) constraints and indices.
Alternatively, you could try not to use Hibernate at all, or to change your model - if that's possible given your requirements, which I don't know. That means you could try to generate and execute the SQL queries yourself, use a NoSQL database, or use NoSQL storage in a SQL database that supports it - like Postgres.
We're doing something similar, i.e. we have Hibernate entities that contain some complex data which is stored in a JSONB column. Hibernate can read and write that column via a custom UserType, but it can't filter on it (Postgres would support that, but we didn't manage to enable the necessary syntax in Hibernate).

Audit Using Hibernate Envers

I am using Hibernate Envers to keep a history of my data, and it's working fine. The problem is that it's creating duplicate data in the history table, i.e. it writes a row to the history table whether or not anything has changed in the audited table. I want only changed fields stored in my history table. I am new to Hibernate Envers. What can I do?
If I understand your question correctly, Envers doesn't work that way, at least not out of the box.
Envers is a commit-time snapshot auditing solution: just before commit, it examines the audited entity state, determines whether any attributes have been modified or not, and records a snapshot of all audited fields of that entity at that point in time. This means that the only time an audit entry isn't created is when no attributes have been modified.
But it also uses the snapshot approach because it fits really well with the Query API.
Consider the inefficiency that would occur if a query to find an entity at a given revision had to read all rows from that revision back to the beginning of time, iterating each row and merging the column state captured to just instantiate a single row result-set.
With the snapshot approach, it boils down to the following query, no loops or iterative work.
SELECT e FROM AuditedEntity e WHERE e.revisionNumber = :revisionNumber
This is far more efficient from an I/O perspective, both for the database reading the data pages and for the network, which streams a single-row result set rather than a multi-row result set to the client.
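That single-row lookup is essentially what the Query API does for you; for example, loading an entity as it looked at a given revision (MyEntity, entityId and revisionNumber are placeholders):

AuditReader reader = AuditReaderFactory.get(entityManager);
MyEntity atRevision = reader.find(MyEntity.class, entityId, revisionNumber);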
I'd say in this case, the saying "space is cheap" really holds true when you compare that against the cost and inefficiencies your application would face doing it any other way.
If this is something you'd like Envers to support, perhaps via some user-configured strategy, then you're welcome to log a new feature request in JIRA for hibernate-envers and I can take a look at its feasibility.
I had a similar problem.
In my case the error was that the audited field had a higher precision than the database field. Please see my reply to another thread: https://stackoverflow.com/a/65844949/13381019

Hibernate pagination with ____ToMany mapping

I'm writing this on the fly on my phone, so forgive the crappy code samples.
I have entities with a manytomany relationship:
#JoinTable(name="foo", #JoinColum="...", #InverseJoinColumn="...")
#ManyToMany
List list = new ArrayList();
I want their data to be retrieved in a paginated way.
I know about setFirstResult and setMaxResults. Is there a way to use these with the mapping? As in, I retrieve the object and the list gets filled with as many entries as fit on a single page, with the appropriate offset.
I guess I'm just unclear on the best way to do this. I could just use Hibernate Criteria manually to get the effect, but I feel that's sidestepping the mapping. I have this mapping; I want to see if there's a way to use it in a paginated way.
PS: If this is impractical, just say so. Also, if it is, can I still use the mapping to add new entries to the join table? As in, if the entity is a persisted entity in the DB but I haven't fetched the many-to-many list, can I add something new to it, so that when it's persisted with cascade-all it'll be added to the join table without clearing the other entries?
The type of the relationship between entities that are part of your query isn't that important. There are a couple of ways to tackle this.
If your database supports the LIMIT keyword in its queries, you can use it to page through data sets, assuming you sort your data. Note that if your data changes while your user is navigating between pages, you might see some duplicates or miss some records. You'll also be stuck rewriting if you move to a database that doesn't have the LIMIT keyword.
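In Hibernate terms, that's what setFirstResult/setMaxResults do; Hibernate translates them to the dialect's LIMIT/OFFSET where available. A sketch that pages through the collection via a query instead of the mapped list, with Parent and Child standing in for your actual entities (assumes Hibernate 5.2+ for the typed createQuery):

List<Child> page = session.createQuery(
        "select c from Parent p join p.children c where p.id = :id order by c.id",
        Child.class)
    .setParameter("id", parentId)
    .setFirstResult(pageNumber * pageSize) // offset
    .setMaxResults(pageSize)               // page size
    .getResultList();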
If you need to freeze the data at the point of the original query, you need to use a 3rd-party framework, or write your own, to fetch the list of ids matching your query, then split up that list and fetch by id in subsets for pagination. This is more reliable and can be made to work with any database, as the sketch below shows.
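A sketch of that id-list approach, again with placeholder names; the id list is fetched once (and could be kept in the HTTP session), then each page is loaded by id:

List<Long> ids = session.createQuery(
        "select c.id from Child c order by c.id", Long.class)
    .getResultList();

int from = Math.min(pageNumber * pageSize, ids.size());
int to = Math.min(from + pageSize, ids.size());
List<Child> page = session.createQuery(
        "from Child c where c.id in (:ids) order by c.id", Child.class)
    .setParameterList("ids", ids.subList(from, to))
    .getResultList();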
Displaytag is a data paging framework I've used and can therefore tell you works well for large datasets. It's also one of the older solutions for this problem and is not part of a larger framework.
http://displaytag.sourceforge.net/11/tut_externalSortAndPage.html
Tablesorter is another one I came across. This one uses jQuery and fetches the entire data set in one query, so strictly speaking it doesn't meet your "fetch the data in a paginated way" criterion. (It might not be appropriate for large sets.)
http://tablesorter.com/docs/
This tutorial might be helpful:
http://theopentutorials.com/examples/java-ee/jsp/pagination-in-servlet-and-jsp/
If you're already using a framework take a look at whether that framework has tackled pagination:
Spring MVC provides a data pager
http://blog.fawnanddoug.com/2012/05/pagination-with-spring-mvc-spring-data.html
GWT provides a data pager:
http://www.gwtproject.org/javadoc/latest/com/google/gwt/user/cellview/client/SimplePager.html
The following references might be helpful too:
JDBC Pagination
which also points to:
http://java.avdiel.com/Tutorials/JDBCPaging.html

Variable structure in a DB table to be read in a Web form

I have a "variable" structure to be put in a table DB. By "variable" I mean a sequence of couples field/value in which the "kind" of field determines the value type, I don't know exactly field order and I don't know how many times fields can repeat. Sometimes group of fields will repeat several times (it is a fiscal model).
Additional requirement: I should map these variable data into web page forms, handling some CRUD work.
JQuery-ui, Struts 2, Hibernate. Preferred DBMS: MySQL.
The solutions I thought of:
1. A vertical table. I could have some performance issues, which I could resolve with materialized views that "pivot" the rows into columns when I need to process massive amounts of data. I haven't gone far in this direction, as it seems very expensive to develop.
2. LOB fields. Pack my columns into one of those, perhaps with a "mapping" table to decode each column. My idea is to pull the searchable fields out as "real" columns, so as to leave in the LOB just the less interesting mass of data and avoid performance problems.
2a. Or better: use XML inside the LOB field. This could make packing/unpacking the data more comfortable, especially having to map the data to a web form.
What do you think? And furthermore, is there some way to create automatic views from XML fields, or a better way to map such data to a web form? I suspect Hibernate Tools won't work in any of the cases I described.
I hope I have been clear, it's still a bit confusing even to me :)
Your option 1 is the Entity-Attribute-Value antipattern.
See my answer to Product table, many kinds of product, each product has many parameters and my blog post EAV FAIL for alternatives and some reasons why EAV is wrong, at least for a relational database (I cover EAV in my book, SQL Antipatterns Volume 1: Avoiding the Pitfalls of Database Programming).
Also read this article about how a similar structure nearly doomed a company: Bad CaRMa.
Your options 2 & 3 are similar to described in How FriendFeed uses MySQL to store schema-less data. I don't know of any automatic way for an ORM to maintain that structure for you. You do have the chore of keeping your inverted index tables in sync with your LOB data.

Partitioning with Hibernate

We have a requirement to delete on the order of 200K rows from the database every day. Our application is Java/Java EE based, using an Oracle DB and the Hibernate ORM tool.
We explored various options like
Hibernate batch processing
Stored procedure
Database partitioning
Our DBA suggests database partitioning is the best way to go, so we can easily recreate and drop the partitioned table every day. Now the issue is that we have 2 kinds of data: one kind we want to delete every day, and the other we want to keep. Suppose this data is stored in a table "Trade". With partitioning, we have 2 "Trade" tables. We already have an existing Hibernate-based DAO layer to fetch/store trades from/to the DB. When we partition the database, how can we control through Hibernate which of the two tables each trade goes into? Basically, I want the trades that need to be deleted by end of day to go into the partitioned table, and the trades I want to keep into the main table. Please suggest how this can be done with Hibernate. We may add an additional column to identify the trades to be deleted, but how can we ensure those trades go into the partitioned trade table using Hibernate?
I would appreciate if someone can suggest any better approach in case we are on wrong path.
When we decide to partition the database, how can we control the trades to go in which of the two tables through hibernate.
That's what Hibernate Shards is for.
You could use a Hibernate inheritance strategy.
If you know at object creation time that it will be deleted by the end of the day, you can create a VolatileTrade that is a subclass of Trade (with no other attributes). Use the 'table per concrete class' strategy (section 9.1.5 of the Hibernate 3.3 reference documentation) for the mapping.
(I think I would make Trade an abstract superclass with two concrete subclasses, PersistentTrade and VolatileTrade, so that if you have some other classes that you know will reference only PersistentTrade (or VolatileTrade), you can enforce that in your code. If you used the Trade superclass as the PersistentTrade, you wouldn't be able to enforce that.)
The volatile trades will go into one table and the 'persistent' trades into another.
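A sketch of that mapping with JPA annotations (attributes omitted; note that TABLE_PER_CLASS cannot use IDENTITY id generation, so ids are assumed to be assigned or sequence-based):

@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class Trade {
    @Id
    private Long id;
    // common trade attributes ...
}

@Entity
public class PersistentTrade extends Trade { } // maps to its own table, kept

@Entity
public class VolatileTrade extends Trade { }   // maps to its own table, purged daily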
Be aware that you won't be able to set a FK constraint on Trade as a whole (persistent and volatile) from other tables in the DB.
Then you just have to clear the table when you want.
Be careful to define a locking mechanism so that no other thread tries to write data to the table during the drop and the create (if you use that approach). That won't be an easy task, and doing it right might impact the performance of every operation that inserts data into the table (as each will have to acquire the lock).
Wouldn't it be easier to truncate the table?
