Hibernate not reflecting changes - Java

I have a Struts application that uses Hibernate to access a MySQL DB. I have a number of pages that make changes to the DB. These changes go through fine: the data is updated in the DB. However, when browsing to the page that should show this updated information, it's often not there, and even after a few page refreshes it still isn't. Eventually it will turn up. I'm assuming this has something to do with Hibernate caching data, but how can I ensure that data is up to date? I had assumed that since it all went through the Hibernate session it would pick up changes.
The code I'm using to do the update is:
hSession = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = hSession.getTransaction();
tx.begin();
hSession.update(user);
Then to pull that user out again:
org.hibernate.Session hSession = HibernateUtil.getSessionFactory()
.getCurrentSession();
Transaction tx = hSession.beginTransaction();
User u = (User) hSession.load(User.class, userID);

Too little information to really give an answer, but some points to check:
You are using transactions. Are you properly committing them? Maybe at some point your code cannot see changes because they are not yet committed (or because the reading code is in another transaction which still sees the previous state).
The cache might also be a problem. To check, you could explicitly flush the session after each change to the DB (Session.flush()). This will probably degrade performance, but might help you narrow down the problem.
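For instance, a minimal sketch of the update with an explicit flush and commit (assuming the same HibernateUtil helper as in the question):
Session hSession = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = hSession.beginTransaction();
try {
    hSession.update(user);
    hSession.flush();  // push the pending SQL to the DB now (useful while debugging)
    tx.commit();       // until this runs, other transactions cannot see the change
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
}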

It is probably a result of the second-level (session factory) Hibernate cache.
Caching should be transparent and work fine with the code you gave, but problems usually occur when:
you are running a cluster of machines and have not configured a way for the caches to invalidate each other;
you are updating the database outside of Hibernate.
The easiest way to determine whether it is a second-level cache problem is to completely disable the cache in your Hibernate config. If it is the cache, you can configure the cluster nodes to know about each other so they can manage the cache automatically; if the problem is updates made outside Hibernate, you can manually invalidate cache items with the Hibernate API.
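For example, a minimal sketch of disabling the second-level and query caches programmatically while testing (assuming you build the SessionFactory yourself; the same properties can also go in hibernate.cfg.xml):
Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
// Switch both caches off to test whether they are the source of the stale reads.
cfg.setProperty("hibernate.cache.use_second_level_cache", "false");
cfg.setProperty("hibernate.cache.use_query_cache", "false");
SessionFactory sessionFactory = cfg.buildSessionFactory();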

Related

How to force Hibernate to read external database changes

I have a common database that is used by two different applications (different technologies, different deployment servers, they just use the same database).
Let's call them application #1 and application #2.
Suppose we have the following scenario:
the database contains a table called items (its content doesn't matter)
application #2 is developed in Spring Boot and is mainly used just for reading data from the database
application #2 retrieves an item from the database
application #1 changes that item
application #2 retrieves the same item again, but the changes are not visible
What I understood by reading a lot of articles:
when application #2 retrieves the item, Hibernate stores it in the first level cache
the changes done to the item by application #1 are external changes, and Hibernate is unaware of them, so the cache is not updated (the same happens when you make a manual change in the database)
you cannot disable Hibernate's first level cache.
So, my question is: can you force Hibernate to refresh the entities every time they are read (or make it go to the database) without explicitly calling em.refresh(entity)? The problem is that the business logic module of application #2 is used as a dependency, so I can only call service methods (i.e. I don't have access to the EntityManager or Session references).
Hibernate's L1 cache is roughly equivalent to a DB transaction running at repeatable-read isolation. Basically, if you read or write some data, the next time you query in the context of the same session, you will get the same data. Further, within the same process, sessions run independently of each other, which means two sessions may be looking at different data in their L1 caches.
If you use repeatable read or a lower isolation level, then you shouldn't really be concerned about the L1 cache, as you might run into this scenario regardless of the ORM (or with no ORM at all).
I think you only need to think about the L2 cache here. The L2 cache stores data on the assumption that only Hibernate is accessing the DB, which means that if some change happens in the DB behind its back, Hibernate might not know about it. If you just disable the L2 cache, you are sorted.
Further reading - Short description of hibernate cache levels
Well, if you cannot access the Hibernate session, you are left with nothing; any operation you want to do requires session access. For instance, you can remove an entity from the cache after reading it like this:
session.evict(entity);
or this
session.clear();
but first and foremost you need a session. Since you are calling only services, you either need to create service endpoints that clear the session cache after serving a request, or modify the existing endpoints to do that.
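For instance, a hypothetical service method along those lines (Item and the way the session is obtained are illustrative, not from the question):
public Item findItemFresh(Session session, long id) {
    Item item = (Item) session.get(Item.class, id); // may be served from the first-level cache
    session.evict(item);                            // detach it so a later get() re-reads the DB
    return item;
}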
You can try to use StatelessSession, but you will lose cascading and other things.
https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#_statelesssession
https://stackoverflow.com/a/48978736/3405171
You can force a new transaction to start; that way Hibernate will not read from the cache but will redo the read from the DB.
You can annotate your function in this manner:
@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
Requesting a new transaction makes the system create a new Hibernate session, so the data will not come from the cache.
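A sketch of how that might look on a Spring service method (loadItemFresh, Item and itemRepository are made-up names):
@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
public Item loadItemFresh(long id) {
    // A new transaction means a new persistence context, so the entity is
    // read from the database instead of from an older session's cache.
    return itemRepository.findById(id).orElse(null);
}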

Hibernate first level cache and get method

I have read that the session.get(Employee.class, new Long(1)) method will take the data either from the cache or from the database.
Suppose two users are accessing the application concurrently:
If User1 does a get, the data is retrieved from the DB and moved to the cache.
User2 then deletes or updates that record.
If User1 now does another get, the data will be retrieved from the cache.
Isn't User1 getting stale data? Doesn't this fall into the classic pitfall of caching?
Or am I missing something here?
One could ask why User1 is doing session.get twice in the same session, but I would still like to hear different opinions.
You understand it correctly: the cache is bound to the session, and if an object is already loaded into the first-level cache, then no SQL will be executed by #get(). You can use #evict() to clear one object from the cache, or #clear() to clear every object from the cache, without closing the session. Closing the session always discards the entire cache.
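A small sketch of that behaviour, assuming an open Session and the Employee entity from the question:
Employee e1 = (Employee) session.get(Employee.class, 1L); // hits the DB and caches the entity
Employee e2 = (Employee) session.get(Employee.class, 1L); // served from the first-level cache, no SQL
session.evict(e1);                                        // remove just this entity from the cache
Employee e3 = (Employee) session.get(Employee.class, 1L); // hits the DB again and sees User2's changes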
See a nice explanation here.
You need to read more about the container-managed entity manager:
The most common and widely used entity manager in a Java EE environment is the container-managed entity manager. In this mode, the container is responsible for the opening and closing of the entity manager (this is transparent to the application). It is also responsible for transaction boundaries. A container-managed entity manager is obtained in an application through dependency injection or through JNDI lookup. A container-managed entity manager requires the use of a JTA transaction.
Which approach is right depends on what you want to achieve and on how the entity manager is used.
More documentation: EntityManager
No, because Hibernate keeps data in its cache, but if you update the data through Hibernate it will know that the change exists. You will have trouble if you update the data with raw SQL or from some other place where Hibernate cannot see that something happened.

synchronize EntityManager with database

I deleted a record from the database, and the entity manager still references that deleted record.
I have the following query:
List results = em.createNamedQuery("Customers.findNew")
.setParameter("status", "n")
.getResultList();
I am getting back results which include the deleted record. I've read that the entity manager caches database results for better performance. This is fine if the application using the entity manager is the only one accessing the database. But what happens when multiple systems access the same database?
I have tried:
1. em.flush()
2. em.refresh()
3. em.clear()
right before I use the entity manager to query the database, to try to force a resynchronization, but none of them work. I am still getting the same record that isn't in the database anymore.
UPDATE
The program I used to delete the record, Oracle SQL Developer, hadn't committed the changes. So JPA was working fine; it was the program I was using to make changes to the database that hadn't committed them. If you are experiencing a similar problem, make sure your DB admin tool has committed its changes.
EntityManagers cache objects themselves, for their own use. This is generally safe.
In addition, you can enable the second-level cache, where inconsistencies can arise between systems. If you have enabled it, try disabling it.
Did you delete the entity using the same EntityManager? If not, make sure the transaction that deletes the entity is committed, and make sure the transaction of your reading EntityManager starts after the deletion is committed.
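As a sketch of that ordering in plain JPA (Customer, emDelete and emRead are illustrative names for the entity and the two EntityManagers):
// Delete in one EntityManager and commit first...
emDelete.getTransaction().begin();
Customer c = emDelete.find(Customer.class, id);
emDelete.remove(c);
emDelete.getTransaction().commit(); // the row is gone for other connections only after this point

// ...then start the reading transaction, which can no longer see the deleted row.
emRead.getTransaction().begin();
List<Customer> results = emRead.createNamedQuery("Customers.findNew", Customer.class)
        .setParameter("status", "n")
        .getResultList();
emRead.getTransaction().commit();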

How to coordinate J2EE and Java EE database access?

We have a somewhat huge application which started a decade ago and is still under active development. So some parts are still in J2EE 1.4 architecture, others using Java EE 5/6.
While testing some new code, I realized that I had data inconsistency between information coming in through old and new code parts, where the old one uses the Hibernate session directly and the new one an injected EntityManager. This led to the problem that one part couldn't see new data from the other part and thus also created a database record, resulting in a primary key constraint violation.
It is planned to migrate the old code completely to get rid of J2EE, but in the meantime - what can I do to coordinate database access between the two parts? And shouldn't both ways come together at some point within the application server, in the Hibernate layer, regardless of whether the access goes via JPA or directly?
You can mix both the Hibernate Session and the EntityManager in the same application without any problem. The EntityManagerImpl simply delegates calls to a private SessionImpl instance.
What you describe is a transaction configuration anomaly. Every database transaction runs in isolation (unless you use READ_UNCOMMITTED, which I guess is not the case), but once you commit it, the changes are available to any other transaction or connection. So once a transaction is committed, you should see all changes in any other Hibernate Session, JDBC connection, or even your database UI manager tool.
You said that there was a primary key conflict. This can't happen if you use Hibernate's identity or sequence generators. With the legacy hi/lo generator you can have problems if an external connection tries to insert records into a table for which Hibernate uses that hi/lo identifier generator.
This problem can also occur with a master/master replication anomaly. If you have multiple nodes and replication is not strictly consistent, you can end up with primary key constraint violations.
Update
Solution 1:
When coordinating the new and the old code trying to insert the same entity, you could have select-then-insert logic running in a SERIALIZABLE transaction. The SERIALIZABLE transaction acquires the appropriate locks on your behalf, so you can keep a default READ_COMMITTED isolation level while only the problematic service methods are marked as SERIALIZABLE.
So both the old code and the new code run a select to check whether there is already a row satisfying the constraint, and insert it only if nothing is found. The SERIALIZABLE isolation level prevents phantom reads, so it should prevent the constraint violations.
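A sketch of that logic, using Spring's @Transactional for brevity (Lookup, lookupRepository and findByCode are placeholder names; the same idea applies with EJB/JTA transaction attributes):
@Transactional(isolation = Isolation.SERIALIZABLE)
public Lookup findOrCreate(String code) {
    Lookup existing = lookupRepository.findByCode(code);
    if (existing != null) {
        return existing;
    }
    // SERIALIZABLE prevents a phantom row from appearing between the select and the insert.
    return lookupRepository.save(new Lookup(code));
}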
Solution 2:
If you are open to delegating this task to JDBC, you might also investigate the MERGE SQL statement, if your current database supports it. Basically, this is an upsert operation issuing an update or an insert behind the scenes. This approach is attractive since you can run it even on READ_COMMITTED. The drawbacks are that you can't use Hibernate for it and that only some databases support it.
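A minimal JDBC sketch of such an upsert (the MERGE syntax shown is Oracle-flavoured, and the table and column names are made up):
String merge =
    "MERGE INTO lookup_values t " +
    "USING (SELECT ? AS code FROM dual) s ON (t.code = s.code) " +
    "WHEN NOT MATCHED THEN INSERT (code) VALUES (s.code)";
try (PreparedStatement ps = connection.prepareStatement(merge)) {
    ps.setString(1, code);
    ps.executeUpdate(); // inserts only when no matching row exists
}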
If you instantiate a SessionFactory for the old code and an EntityManagerFactory for the new code separately, that can lead to different values in the first-level caches. If, during a single HTTP request, you change a value in the old code but do not immediately commit, the value will be changed in that session's cache but will not be available to the new code until it is committed. Independently of any transaction or database locking that would protect persistent values, that mix of two different Hibernate sessions can produce weird results for in-memory values.
I assume that the injected EntityManager still uses Hibernate. IMHO the most robust solution is to get the EntityManagerFactory for the PersistenceUnit and cast it to a Hibernate EntityManagerFactoryImpl. Then you can directly access the underlying SessionFactory:
SessionFactory sessionFactory = entityManagerFactory.getSessionFactory();
You can then safely use this SessionFactory in your old code, because now it is unique in your application and shared between old and new code.
You still have to deal with session creation/closing and transaction management. I suppose it is already implemented in the old code. Without knowing more, I think you should port it to JPA: I am pretty sure that if an EntityManager exists, sessionFactory.getCurrentSession() will give you its underlying Session, but I cannot vouch for the opposite direction.
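As a side note, JPA 2.1+ offers a portable way to reach the SessionFactory without casting to an implementation class:
// Avoids depending on EntityManagerFactoryImpl directly (JPA 2.1+).
SessionFactory sessionFactory = entityManagerFactory.unwrap(SessionFactory.class);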
I've run into a similar problem when I had a list of enumerated lookup values, where two pieces of code would check for the existence of a given value in the list, and if it didn't exist the code would create a new entry in the database. When both of them came across the same non-existent value, they'd both try to create a new one and one would have its transaction rolled back (throwing away a bunch of other work we'd done in the transaction).
Our solution was to create those lookup values in a separate transaction that committed immediately; if that transaction succeeded, then we knew we could use that object, and if it failed, then we knew we simply needed to perform a get to retrieve the one saved by another process. Once we had a lookup object that we knew was safe to use in our session, we could happily do the rest of the DB modifications without risking the transaction being rolled back.
It's hard to know from your description whether your data model would lend itself to a similar approach, where you'd at least commit the initial version of the entity right away, and then once you're sure you're working with a persistent object you could do the rest of the DB modifications that you knew you needed to do. But if you can find a way to make that work, it would avoid the need to share the Session between the different pieces of code (and would work even if the old and new code were running in separate JVMs).

Hibernate transaction problem

We are using Hibernate and Spring MVC with the OpenSessionInView filter.
Here is the problem we are running into (pseudocode):
transaction 1
    load object foo
transaction 1 end
update foo's properties (not calling session.save or session.update, only foo's setters)
validate foo (using Hibernate Validator)
if validation fails
    go back to edit screen
    transaction 2 (read only)
        load form backing objects from db
    transaction 2 end
    go to view
else
    transaction 3
        session.update(foo)
    transaction 3 end
The problem we have is that if the validation fails, foo is marked "dirty" in the Hibernate session (since we use OpenSessionInView, we only have one session throughout the HTTP request). When we load the form backing objects (like a list of some entities via an HQL query), Hibernate checks for dirty objects in the session before performing the query, sees that foo is dirty, and flushes it; when transaction 2 is committed, the updates are written to the database.
The problem is that even though transaction 2 is read only, and even though foo wasn't updated in transaction 2, Hibernate has no knowledge of which object was updated in which transaction and therefore doesn't restrict the flush to objects from that transaction.
Any suggestions? Did somebody run into a similar problem before?
Update: this post sheds some more light on the problem: http://brian.pontarelli.com/2007/04/03/hibernate-pitfalls-part-2/
You can run a get on foo to put it into the Hibernate session, and then replace it with the object you created elsewhere. But for this to work, you have to know all the ids of your objects so that they look correct to Hibernate.
There are a couple of options here. First, you don't actually need transaction 2: since the session is open, you could just load the backing objects from the DB, thus avoiding the dirty check on the session. The other option is to evict foo from the session after it is retrieved and later use session.merge() to reattach it when you want your changes to be stored.
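The evict/merge variant might look like this (a sketch; Foo, fooId and the Hibernate Validator usage are assumed, not from the question):
Foo foo = (Foo) session.get(Foo.class, fooId);
session.evict(foo);            // detach foo so the auto-flush dirty check ignores it
foo.setSomeProperty(newValue); // edits now happen on a detached instance
if (validator.validate(foo).isEmpty()) {
    session.merge(foo);        // reattach and persist only when validation passes
}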
With Hibernate it is important to understand exactly what is going on under the covers. At every commit boundary it will attempt to flush all changes to objects in the current session, regardless of whether the changes were made in the current transaction, or in any transaction at all for that matter. This is why you don't actually need to call session.update() for any object that is already in the session.
Hope this helps
There is a design issue here. Do you think an ORM is a transparent abstraction of your datastore, or do you think it's a set of data manipulation libraries? I would say that Hibernate is the former. Its whole reason for existing is to remove the distinction between your in-memory object state and your database state. It does provide low-level mechanisms to allow you to pry the two apart and deal with them separately, but by doing so you're removing a lot of Hibernate's value.
So very simply - Hibernate = your database. If you don't want something persisted, don't change your persistent objects.
Validate your data before you update your domain objects. By all means validate domain objects as well, but that's a last line of defense. If you do get a validation error on a persistent object, don't swallow the exception. Unless you prevent it, Hibernate will do the right thing, which is to close the session there and then.
What about using Session.clear() and/or Session.evict()?
What about setting singleSession=false on the filter? That might put your operations into separate sessions so you don't have to deal with the first-level cache issues. Otherwise you will probably want to detach/attach your objects manually as suggested above. You could also change the FlushMode on your Session if you don't want things being flushed automatically (FlushMode.MANUAL).
Implement a service layer, take a look at Spring's @Transactional annotation, and mark your methods as @Transactional(readOnly = true) where applicable.
Your flush mode is probably set to AUTO, which means you don't really have control over when the session is flushed to the DB.
You could also set your flush mode to MANUAL, and your services/repositories will then only synchronize the DB with your app when you tell them to.
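For example (a sketch; in recent Hibernate versions the setter is named setHibernateFlushMode):
Session session = sessionFactory.getCurrentSession();
session.setFlushMode(FlushMode.MANUAL); // nothing is written to the DB until you ask

// ... load form backing objects, run validation, etc. ...

if (valid) {
    session.flush(); // explicitly synchronize the session with the database
}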
