I deleted a record from the database, and the entity manager still references that deleted record.
I have the following query:
List results = em.createNamedQuery("Customers.findNew")
.setParameter("status", "n")
.getResultList();
I am getting back results which include the deleted record. I've read that the entity manager caches database data for better performance. That is fine if the application using the entity manager is the only one accessing the database, but what happens when multiple systems access the same database?
I have tried:
1. em.flush()
2. em.refresh()
3. em.clear()
right before I use the entity manager to query the database, to try to force a resynchronization, but none of them work. I am still getting back the record that isn't in the database anymore.
UPDATE
The program I used to delete the record, Oracle SQL Developer, hadn't committed the change. So JPA was working fine; it was the tool I was using to modify the database that hadn't committed its changes. If you are experiencing a similar problem, make sure your database admin tool has committed its changes.
EntityManagers cache objects themselves, for their own use. This is generally safe.
In addition, you can enable the second-level cache, where inconsistencies between systems can arise. If you have enabled it, try disabling it.
Did you delete the entity using the same EntityManager? If not, make sure the transaction that deletes the entity is committed, and make sure the transaction of your reading EntityManager starts after the deletion is committed.
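As a rough sketch of that ordering (the EntityManagerFactory, the Customer entity and customerId are placeholders, not taken from the question):

EntityManager em1 = emf.createEntityManager();
em1.getTransaction().begin();
Customer toDelete = em1.find(Customer.class, customerId);
if (toDelete != null) {
    em1.remove(toDelete);                       // schedule the delete
}
em1.getTransaction().commit();                  // only after this commit is the delete visible to others
em1.close();

// A reading EntityManager whose transaction starts after the commit will not see the row.
EntityManager em2 = emf.createEntityManager();
List<Customer> stillNew = em2.createNamedQuery("Customers.findNew", Customer.class)
        .setParameter("status", "n")
        .getResultList();
em2.close();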
Related
I've got a Spring application using Hibernate. I've implemented Envers into it, which is working fine. However, Hibernate will by default automatically flush before some transactions are committed.
For example, I have an MVC endpoint that will update a record, but before saving it, will have to make various other queries to retrieve some other data. Each time another query is run, Hibernate flushes and this results in there being multiple audit rows for each change. This creates some confusion, as there is already a modified date on my record which isn't changed in each update (as it's flushing before this property is changed).
What are my options for managing this more effectively, and creating a reliable audit log even with Hibernate flushing in this way? Is the only answer to implement my own listener with some custom logic to check if it should actually be committing an audit change or not?
You can detach the entity and merge it when you are done. Queries only trigger a flush if they touch tables that would be affected by pending inserts/updates/deletes. Native queries are a different topic: Hibernate has no SQL parser to figure out which tables you are touching, so it is conservative and flushes all pending changes.
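A rough sketch of that detach/merge approach, assuming a plain EntityManager and invented Order/Item entities:

Order order = entityManager.find(Order.class, orderId);
entityManager.detach(order);                    // changes to 'order' are no longer flushed automatically

order.setStatus("PROCESSED");                   // modify the detached instance freely

// intermediate lookups no longer force a flush of the pending change to 'order'
List<Item> items = entityManager
        .createQuery("select i from Item i where i.category = :cat", Item.class)
        .setParameter("cat", "books")
        .getResultList();

entityManager.merge(order);                     // re-attach and write the accumulated changes once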
I have read that the session.get(Employee.class, new Long(1)) method will take the data from the cache or from the database.
Suppose there are two users accessing the application concurrently:
If User1 does a get, the data is retrieved from the DB and moved into the cache.
If User2 then deletes or updates that record, and
User1 does another get, the data will be retrieved from the cache.
Isn't User1 getting old data? Doesn't this fall into the pitfall of caching?
Or am I missing something here?
One could ask why User1 would call session.get twice within the same session, but I'd still like to hear other opinions.
You understand it correctly: the cache is bound to the session, and if an object is already loaded into the first-level cache, then no SQL will be executed by #get(). You could use #evict() to clear one object from the cache, or #clear() to clear every object from the cache, without closing the session. Closing the session always discards the entire cache.
See a nice explanation here.
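A small illustration of this (Employee, the id and the SessionFactory are placeholders):

Session session = sessionFactory.openSession();

Employee first = (Employee) session.get(Employee.class, 1L);    // hits the database, instance goes into the first-level cache
Employee second = (Employee) session.get(Employee.class, 1L);   // same session: served from the cache, no SQL executed

session.evict(first);                                           // drop just this object from the session cache
Employee reloaded = (Employee) session.get(Employee.class, 1L); // hits the database again

session.clear();                                                // or drop every cached object in this session
session.close();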
You need to read more about the container-managed entity manager:
The most common and widely used entity manager in a Java EE environment is the container-managed entity manager. In this mode, the container is responsible for the opening and closing of the entity manager (this is transparent to the application). It is also responsible for transaction boundaries. A container-managed entity manager is obtained in an application through dependency injection or through JNDI lookup. A container-managed entity manager requires the use of a JTA transaction.
It depends on what you want to understand and achieve, and on how the entity manager is used.
More documentation: Entity Manager
No, because Hibernate caches the data, but if you update the data through Hibernate it will know that a change exists. You will have trouble if you update the data with plain SQL or from some other place where Hibernate cannot see that something happened.
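If the data was changed outside Hibernate, one way to force a re-read is refresh(); a minimal sketch with an invented Product entity:

Product product = (Product) session.get(Product.class, productId); // may come from the session cache
session.refresh(product);                                          // issues a SELECT and overwrites the in-memory state with the current database row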
Our java application is using Toplink JPA to connect the data access layer to our SQL Server 2008 database.
We can query the database and get our results without any issue. The problem is that if we try and change the returned entity, it persists to the database as soon as the setter is called.
Query rQuery = em.createNamedQuery("Region.findAll");
Region r = (Region) rQuery.getResultList().get(0);
r.setActive(active);
From what we've been reading on JPA, it seems like it shouldn't send the changes to the database until persist/merge/flush is called. This is the behavior we're looking to have. We want to be able to make all of our changes and then send them all at once. If we send them one at a time and we get an error, we could end up with partially updated records.
I've tried setting the entity manager flush mode to commit in order to force it to wait for a commit call before persisting and it didn't make any difference.
em.setFlushMode(FlushModeType.COMMIT);
I also tried detaching the returned entity before calling the setters, but it throws an exception.
java.lang.AbstractMethodError: oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerImpl.detach(Ljava/lang/Object;)
This is our first time using Toplink JPA and we don't know what else to try.
If anyone has any other suggestions on how to fix this issue I'd really appreciate it.
If your code that updates the entity runs inside a method marked as transactional, then by the end of that method the changes will be committed, since the transaction must commit any changes before it closes.
Merge is used when the entity is detached: you made some changes to that entity and you need to re-attach it and merge the update into the database.
The solution is to run the query in a separate method with transaction attribute REQUIRES_NEW and pass the result set to the method that does the update with transaction attribute REQUIRED or MANDATORY.
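A hedged EJB-style sketch of that split; the bean name is invented, Region is the entity from the question, and REQUIRES_NEW/REQUIRED are the container-managed transaction attributes referred to above:

import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class RegionService {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own transaction that commits when the method returns,
    // so the returned entities are detached in the caller.
    @SuppressWarnings("unchecked")
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public List<Region> loadRegions() {
        return em.createNamedQuery("Region.findAll").getResultList();
    }

    // Joins the caller's transaction; the detached entities only reach the
    // database through merge and the surrounding commit.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void updateRegions(List<Region> regions, boolean active) {
        for (Region r : regions) {
            r.setActive(active);
            em.merge(r);
        }
    }
}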
I am new to Hibernate. Please tell me what getHibernateTemplate().flush() is used for and how it works.
When using Hibernate, entities are loaded into a persistence context called the session, and changes like creating, updating, and deleting persistent objects are actually made in memory. When you want or need to synchronize the in-memory state with the database to make the changes persistent, you need to flush the session, causing Hibernate to generate the appropriate SQL INSERT, UPDATE, and DELETE statements.
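For illustration (the entity and setup names are invented; this is roughly what getHibernateTemplate().flush() triggers on the Spring-managed session):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Employee e = (Employee) session.get(Employee.class, 1L);
e.setSalary(50000);          // so far the change only exists in memory, inside the session

session.flush();             // Hibernate now generates and issues the UPDATE statement
tx.commit();                 // the change becomes permanent and visible to other transactions
session.close();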
I have a Struts application that uses Hibernate to access a MySQL DB. I have a number of pages that make changes to the DB. These changes go through fine and the data is updated in the DB. However, when browsing to the page that should show this updated information, it's often not there, and even after a few page refreshes it still isn't there. Eventually it will turn up. I'm assuming this has something to do with Hibernate caching data, but how can I ensure that the data is up to date? I had assumed that, since it all goes through the Hibernate session, it would pick up changes.
The code i'm using to do the update is :
hSession = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = hSession.getTransaction();
tx.begin();
hSession.update(user) ;
Then to pull that user out again:
org.hibernate.Session hSession = HibernateUtil.getSessionFactory()
.getCurrentSession();
Transaction tx = hSession.beginTransaction();
User u= (User) hSession.load(User.class, userID);
Too little information to really give an answer. But some points to check:
You are using transactions. Are you properly committing them? Maybe at some point your code cannot see changes because they are not yet committed (or because the reading code runs in another transaction that still sees the previous state).
The cache might also be a problem. To check, you could explicitly flush the session after each change to the DB (Session.flush()). This will probably degrade performance, but might help you narrow down the problem.
It is probably a result of the second-level (session factory) Hibernate cache.
Caching should be transparent and work fine with the code you gave, but problems usually occur when:
You are running a cluster of machines and have not configured a way for the caches to invalidate each other
You are updating the database outside of Hibernate.
The easiest way to determine whether it is a second-level cache problem is to completely disable the cache in your Hibernate config. If it is the cache, you can configure the cluster nodes to know about each other so they can manage the cache automatically, or, if the problem is updates made outside Hibernate, you can manually invalidate cache items with the Hibernate API.
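If the updates do come from outside Hibernate, a hedged sketch of invalidating second-level cache entries through the Hibernate Cache API (the class and id are placeholders; disabling the cache entirely is usually done via the hibernate.cache.use_second_level_cache property):

sessionFactory.getCache().evictEntityRegion(User.class);      // drop every cached User entity
sessionFactory.getCache().evictEntity(User.class, userID);    // or drop just one instance
sessionFactory.getCache().evictQueryRegions();                // and any cached query results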