Is it possible to update all table data in one query?
I have a database table Person and a corresponding entity PersonEntity. I can get all Person data via JPA in a list such as List<PersonEntity> personAll.
I perform several CRUD operations on the personAll instance, and I want to reflect all of these changes to the database in one go using Hibernate JPA.
In other words, I want the content of the Person table to be replaced with the new content of the personAll instance.
The long way to solve this is to execute several insert, delete and update operations, but is there an easier way of doing it?
Can I do a similar thing when there are two tables, School and Student, with a OneToMany relation between them? See: Hibernate JPA value removing OneToMany relation.
Thanks
It depends on how many rows there are in your table.
If you load all the rows into a Hibernate session and modify the returned instances as required, any changes will be automatically persisted to the database by Hibernate when the session is flushed.
The reason it depends on the size is that if you load the contents of a huge table into a Hibernate session, you risk out-of-memory errors, and even if you don't run out of memory the flush will be very slow, since every entity in the session must be checked for modifications.
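A minimal sketch of that approach, assuming a JPA EntityManager and the PersonEntity from the first question (the getter/setter names are illustrative, not taken from the question):

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

// Every instance returned here becomes managed by the persistence context
List<PersonEntity> personAll =
    em.createQuery("SELECT p FROM PersonEntity p", PersonEntity.class).getResultList();

for (PersonEntity p : personAll) {
    p.setName(p.getName().trim());   // dirty checking picks this change up automatically
}
em.remove(personAll.get(0));         // deletions still need an explicit remove
em.persist(new PersonEntity());      // and new rows need an explicit persist

// On commit the persistence context is flushed and all modifications are written back
em.getTransaction().commit();
em.close();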
I want to design a system. There are different customers using this system, and I need to create duplicate tables for every customer. For example, I have a table Order; all order records for customerA are in table Order_A, and customerB's data is in table Order_B. I can distinguish different customers from the session, but how can I get Spring JPA to map the right table's data onto the Java object?
I know of 2 solutions, but neither is satisfactory:
Use MyBatis, because it supports loading SQL from XML files and parameters inside the SQL;
Use org.hibernate.EmptyInterceptor. This is my current implementation in my project. For every entity, I must define a subclass of it. The interceptor can rewrite the SQL before Hibernate executes it (a sketch is shown below).
However, neither is elegant. Is there a better solution?
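For reference, the EmptyInterceptor approach from option 2 can look roughly like the following. This is only a sketch assuming Hibernate 5; the table names, the tenant suffix and the way the tenant is resolved are illustrative, not part of the original question.

import org.hibernate.EmptyInterceptor;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class TenantTableInterceptor extends EmptyInterceptor {

    private final String tenantSuffix; // e.g. "_A" or "_B", resolved from the web session

    public TenantTableInterceptor(String tenantSuffix) {
        this.tenantSuffix = tenantSuffix;
    }

    @Override
    public String onPrepareStatement(String sql) {
        // Naive rewrite of the generic table name to the tenant-specific one;
        // a real implementation would need more robust SQL handling.
        return sql.replace(" Order ", " Order" + tenantSuffix + " ");
    }
}

// Usage: open a Hibernate session bound to the current tenant's interceptor
Session session = sessionFactory.withOptions()
        .interceptor(new TenantTableInterceptor("_A"))
        .openSession();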
I am using Hibernate Envers to keep a history of my data, and it's working fine. The problem is that it creates duplicate data in the history table, i.e. it creates rows in the history table whether or not anything changed in the audited table. I want only changed fields stored in my history table. I am new to Hibernate Envers. What can I do?
If I understand your question correctly, Envers doesn't work that way, at least not out of the box.
Envers is a commit-time snapshot auditing solution: just before commit, it examines the audited entity's state, determines whether any attributes have been modified, and records a snapshot of all audited fields of that entity at that point in time. This means the only time an audit entry isn't created is when no attributes have been modified.
But it also uses the snapshot approach because it fits really well with the Query API.
Consider the inefficiency that would occur if a query to find an entity at a given revision had to read all rows from that revision back to the beginning of time, iterating over each row and merging the captured column state just to instantiate a single-row result set.
With the snapshot approach, it boils down to the following query, no loops or iterative work.
SELECT e FROM AuditedEntity e WHERE e.revisionNumber = :revisionNumber
This is far more efficient from an I/O perspective, both for the database reading the data pages and for the network, which streams a single-row rather than a multi-row result set to the client.
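For illustration, that lookup through the Envers Query API is roughly the following (the entity name and identifiers are hypothetical):

AuditReader reader = AuditReaderFactory.get(entityManager);
// Returns the full snapshot of the entity as it was at that revision,
// without walking back through earlier revisions.
MyAuditedEntity snapshot = reader.find(MyAuditedEntity.class, entityId, revisionNumber);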
I'd say in this case, the saying "space is cheap" really holds true when you compare that against the cost and inefficiencies your application would face doing it any other way.
If this is something you'd like Envers to support, perhaps via some user-configured strategy, then you're welcome to log a new feature request in JIRA for hibernate-envers and I can take a look at its feasibility.
I had a similar problem.
In my case the problem was that the audited field had higher precision than the database field. Please see my reply to another thread: https://stackoverflow.com/a/65844949/13381019
I'm getting a NonUniqueObjectException in Hibernate.
There is an Item class, and I saved a list of Item objects using Hibernate's session.save.
Now, in the same transaction, I'm trying to update the same Items using a raw SQL query which joins with another table. This gives me a NonUniqueObjectException. The two tables I'm joining are unrelated as Hibernate entities, that is, there is no foreign key relation.
So I have 2 questions:
First, is there any way to write inner join queries in HQL?
Second, how can I avoid the NonUniqueObjectException?
One thing that works is clearing the session before making any raw SQL query. Any better approach is welcome.
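For what it's worth, a minimal sketch of that clear-before-native-query workaround (Hibernate 5 Session API; the native SQL and column names are placeholders, not taken from the question):

// Persist the items first; they become managed in the persistence context
for (Item item : items) {
    session.save(item);
}

// Push pending inserts to the database, then detach everything so the
// native update cannot collide with still-managed instances
session.flush();
session.clear();

// Illustrative join-update; adjust the SQL to the actual schema and dialect
session.createNativeQuery(
        "UPDATE item i JOIN other_table o ON i.other_id = o.id SET i.status = o.status")
    .executeUpdate();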
I am stuck with an issue. I have 3 tables that are associated with a parent table in a one-to-many relationship:
An employee may have one or more degrees.
An employee may have had one or more departments in the past.
An employee may have one or more Jobs.
I am trying to fetch results using a named query in a way that fetches all the results from the Degree and Department tables, but only 5 results from the Jobs table, because I want to apply pagination to the Jobs table.
However, all these entities are mapped on the User entity as Sets. Secondly, I don't want to change the mapping file because of other usages of the same file and due to some architectural restrictions.
Otherwise, I could use the BatchSize annotation in the mapping file, which I am not willing to do.
The best approach is to write three queries:
userRepository.getDegrees(userId);
userRepository.getDepartments(userId);
userRepository.getJobs(userId, pageIndex);
Spring Data is very useful for pagination, as well as simplifying your data access code.
Hibernate cannot fetch multiple Lists in a single query, and even for Sets, you don't want to run a Cartesian Product. So use separate queries instead of a single JPQL query.
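As a sketch of the paginated call, assuming Spring Data JPA and a hypothetical Job entity with a userId attribute (names are illustrative only):

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;

public interface JobRepository extends JpaRepository<Job, Long> {
    // Derived query with pagination; the property name is illustrative
    Page<Job> findByUserId(Long userId, Pageable pageable);
}

// Usage: fetch the requested page of 5 jobs for the user
Page<Job> jobs = jobRepository.findByUserId(userId, PageRequest.of(pageIndex, 5));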
We have a requirement to delete on the order of 200K rows from the database every day. Our application is Java/Java EE based, using an Oracle DB and the Hibernate ORM tool.
We explored various options like
Hibernate batch processing
Stored procedure
Database partitioning
Our DBA suggests database partitioning is the best way to go, so we can easily recreate and drop the partitioned table every day. Now the issue is that we have 2 kinds of data: one kind we want to delete every day and the other we want to keep. Suppose this data is stored in the table "Trade". With partitioning, we have 2 "Trade" tables. We already have an existing Hibernate-based DAO layer to fetch/store trades from/to the DB. When we partition the database, how can we control which of the two tables a trade goes into through Hibernate? Basically, I want the trades that need to be deleted by end of day to go into the partitioned table, and the trades I want to keep to go into the main table. Please suggest how this can be done with Hibernate. We may add an additional column to identify the trades to be deleted, but how can we ensure those trades go to the partitioned trade table using Hibernate?
I would appreciate it if someone could suggest a better approach in case we are on the wrong path.
When we decide to partition the database, how can we control the trades to go in which of the two tables through hibernate.
That's what Hibernate Shards is for.
You could use a Hibernate inheritance strategy.
If you know at object creation time that it will be deleted by the end of the day, you can create a VolatileTrade that is a subclass of Trade (with no other attributes). Use the 'table per concrete class' strategy (section 9.1.5 of the Hibernate 3.3 reference documentation) for the mapping.
(I think I would create an abstract superclass Trade and two concrete subclasses, PersistentTrade and VolatileTrade, so that if you have other classes that you know will reference only PersistentTrade (or VolatileTrade), you can enforce that in your code. If you used the Trade superclass itself as the PersistentTrade, you wouldn't be able to enforce that.)
The volatile trades will go in one table and the 'persistent' trades will go in another table.
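A minimal sketch of that mapping with JPA annotations (the original answer refers to the XML mapping described in the Hibernate 3.3 docs; the class and field names here are illustrative):

import javax.persistence.*;

@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class Trade {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE) // IDENTITY is not supported with TABLE_PER_CLASS
    private Long id;
    private String instrument;
}

// Stored in its own table, kept permanently
@Entity
public class PersistentTrade extends Trade { }

// Stored in a separate table that can be truncated or dropped every day
@Entity
public class VolatileTrade extends Trade { }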
Be aware that you won't be able to set a foreign key constraint on Trade (persistent or volatile) from other tables in the DB.
Then you just have to clear the table when you want.
Be careful to define a locking mechanism so that no other thread tries to write data to the table during the drop and the create (if you use that approach). That won't be an easy task, and doing it right might impact the performance of every operation inserting data into the table (as it will require acquiring the lock).
Wouldn't it be easier to truncate the table?