How long are entities persisted? - java

I am trying to understand how JPA works. From what I know, if you persist an entity, that object will remain in memory until the application is closed. This means that when I look up a previously persisted entity, there will be no query made against the database. Assuming that no insert, update or delete is made, if the application runs long enough, all the information in the database might end up held in memory. Does this mean that at some point I will no longer need the database?
Edit
My problem is not with the database. I am sure that the database cannot be modified from outside the application. I am managing transactions myself, so the data gets stored in the database as soon as I commit. My question is: what happens to the entities after I commit? Are they kept in memory and act like a cache? If so, how long are they kept there? After I commit a persist, I make a select query. This select should return the object I persisted before. Will that object be brought from memory, or will the application query the database?

Not really. Think about it.
Your application probably isn't the only thing that will use the database. If an entity was persisted once and stored in memory, how can you be sure that, let's say, one hour later, it won't be changed by some other means? If that happens, you will have stale data that can harm the logic of your application.
Storing data in memory and hoping that everything will be alright won't bring any benefits. That's why the data stored in the database is your primary source of information, and you should query it every time, unless you are absolutely sure that a subset of the data won't change.

When you persist an entity, it is added to the persistence context, which acts like a first-level cache (this is in memory). When the actual persisting happens depends on whether you use container-managed transactions or deal with transactions yourself. The entity instance lives in memory as long as the transaction is not committed; when it is, it is persisted to the database, XML, etc.
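For illustration, a minimal sketch with a resource-local, application-managed EntityManager (the Item entity is made up for the example):
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
Item item = new Item("example");
em.persist(item); // added to the persistence context (L1 cache), no SQL sent yet
em.getTransaction().commit(); // the INSERT is flushed here at the latest
em.close(); // persistence context gone; 'item' is now detached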

JPA can't work with only the persistence context (L1 cache) or the explicit cache (L2 cache). It always needs to be combined with a datasource, and this datasource typically points to a database that persists to stable storage.
So, the entity is in memory only as long as the transaction (which is required for JPA persist operations) isn't committed. After that it's sent to the datasource.
If the persistence context is transaction scoped (the 'normal' case), then when the transaction commits the L1 cache (the persistence context) is closed and the entities no longer exist there. If the L1 cache somehow bothers you, you can manage it a bit explicitly: there are operations to clear it, and you could separate your read operations (which don't need transactions) from write operations. If there's no transaction active when reading, there's no persistence context, so an entity never becomes attached and is thus never put into this L1 cache.
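For example, with a plain JPA EntityManager, clearing the L1 cache explicitly looks like this:
em.detach(entity); // evict a single entity from the persistence context
em.clear(); // evict everything; all managed entities become detached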
The L2 cache, however, is not cleared when the transaction commits, and entities inside it remain available to the entire application. This L2 cache must be explicitly configured, and you as an application developer must indicate which entities should be cached in it. Via vendor-specific mechanisms (e.g. JBoss Cache, Infinispan) you can put a cap on the number of entities being cached and define so-called eviction policies.
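On the JPA side the opt-in is per entity; a minimal sketch, assuming shared-cache-mode is set to ENABLE_SELECTIVE in persistence.xml:
@Entity
@Cacheable // opt this entity into the shared (L2) cache
public class Country {
    @Id
    private Long id;
    // ...
}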
Of course, nothing prevents you from letting the datasource point to an in-memory embedded DB, but this is outside the knowledge of JPA.

Persistence means in short terms: you can shut down your app, and the data is not lost.
To achieve that you need a database or some other way of saving data so that it's not lost when you shut down the app.

To "persist" an entity means to actually save it in the data base. Sure, JPA maintains some entity information in memory in the persistence context (and this is highly dependent on configuration and programming practices), but at certain point information will be stored in the data base - for instance, when a transaction commits, or likely (but not necessarily) after flush() or merge() operations.

If you want to keep your entities cached after committing, for example for a later select query, you need to use the query cache. Just Google around on that term and it should become clear.
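With Hibernate as the provider, the opt-in looks roughly like this (a sketch that assumes hibernate.cache.use_query_cache=true is configured; the Item entity is illustrative):
List<Item> items = em.createQuery("select i from Item i", Item.class)
        .setHint("org.hibernate.cacheable", true) // store the result in the query cache
        .getResultList();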

Related

How to force Hibernate to read external database changes

I have a common database that is used by two different applications (different technologies, different deployment servers, they just use the same database).
Let's call them application #1 and application #2.
Suppose we have the following scenario:
the database contains a table called items (its content doesn't matter)
application #2 is developed in Spring Boot and it is mainly used just for reading data from the database
application #2 retrieves an item from the database
application #1 changes that item
application #2 retrieves the same item again, but the changes are not visible
What I understood by reading a lot of articles:
when application #2 retrieves the item, Hibernate stores it in the first level cache
the changes that are done to the item by application #1 are external changes and Hibernate is unaware of them, and thus, the cache is not updated (same happens when you do a manual change in the database)
you cannot disable Hibernate's first level cache.
So, my question is: can you force Hibernate to refresh the entities every time they are read (or make it go to the database) without explicitly calling em.refresh(entity)? The problem is that the business logic module from application #1 is used as a dependency in application #2, so I can only call service methods (i.e. I don't have access to the entityManager or session references).
Hibernate's L1 cache is roughly equivalent to a DB transaction running at repeatable-read isolation. Basically, if you read/write some data, the next time you query in the context of the same session, you will get the same data. Further, within the same process, sessions run independently of each other, which means two sessions can be looking at different data in their L1 caches.
If you use repeatable read or less, then you shouldn't really be concerned about the L1 cache, as you might run into this scenario regardless of the ORM (or with no ORM at all).
I think you only need to think about the L2 cache here. The L2 cache stores data and assumes that only Hibernate is accessing the DB, which means that if some change happens in the DB directly, Hibernate might not know about it. If you just disable the L2 cache, you are sorted.
Further reading: Short description of Hibernate cache levels
Well, if you cannot access the Hibernate session, you are left with nothing; any operation you want to do requires session access. For instance, you can remove an entity from the cache after reading it like this:
session.evict(entity);
or this
session.clear();
but first and foremost you need a session. Since you are calling only services, you either need to create service endpoints that clear the session cache after serving them, or modify the existing endpoints to do that.
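For example, a hedged sketch of such an endpoint (the Item entity and method names are illustrative):
public Item findItem(long id) {
    Item item = entityManager.find(Item.class, id);
    if (item != null) {
        entityManager.detach(item); // the JPA equivalent of session.evict(item)
    }
    return item;
}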
You can try to use StatelessSession, but you will lose cascading and other things.
https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#_statelesssession
https://stackoverflow.com/a/48978736/3405171
You can force Hibernate to start a new transaction; this way it will not read from the cache and will redo the read from the DB.
You can annotate your method like this:
@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
By requesting a new transaction, the system will generate a new Hibernate session, so the data will not come from the cache.
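Put together, a sketch assuming Spring's @Transactional (the repository is illustrative):
@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
public Item getItem(long id) {
    // a new transaction means a new Hibernate session with an empty
    // first-level cache, so this read must hit the database
    return itemRepository.findById(id).orElse(null);
}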

Hibernate first level cache and get method

I have read that the session.get(Employee.class, new Long(1)) method will take the data from the cache or from the database.
Suppose two users are accessing the application concurrently:
User1 does a get, so the data is retrieved from the DB and then moved into the cache.
User2 deletes or updates that record.
User1 does the get again, so the data is now retrieved from the cache.
Isn't User1 getting old data? Doesn't this fall into the pitfall of caching? Or am I missing something here?
You could ask why User1 is doing session.get twice in the same session, but I would still like to hear different opinions.
You understand it correctly: the cache is bound to the session, and if an object is loaded into the first-level cache, then no SQL will be executed by #get(). You can use #evict() to clear one object from the cache, or #clear() to clear every object from the cache, without closing the session. Closing the session always discards the entire cache.
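The described behavior in a minimal sketch:
Employee e1 = session.get(Employee.class, 1L); // SELECT issued, entity cached in the session
Employee e2 = session.get(Employee.class, 1L); // no SQL: served from the first-level cache
session.evict(e1); // drop this one entity from the cache
Employee e3 = session.get(Employee.class, 1L); // SELECT issued again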
See a nice explanation here.
You need to read more about the container-managed entity manager:
The most common and widely used entity manager in a Java EE environment is the container-managed entity manager. In this mode, the container is responsible for opening and closing the entity manager (this is transparent to the application). It is also responsible for transaction boundaries. A container-managed entity manager is obtained in an application through dependency injection or through JNDI lookup. A container-managed entity manager requires the use of a JTA transaction.
What it is responsible for depends on what you want to achieve and how it is used.
More documentation: Entity Manager
No, because although Hibernate keeps data in its cache, if you update the data through Hibernate it will know that a change exists. You will have trouble if you update the data with plain SQL or from some other place where Hibernate cannot see that something happened.

How to coordinate J2EE and Java EE database access?

We have a somewhat huge application which was started a decade ago and is still under active development. So some parts are still on the J2EE 1.4 architecture, while others use Java EE 5/6.
While testing some new code, I realized that I had data inconsistencies between information coming in through the old and new code parts, where the old one uses the Hibernate session directly and the new one an injected EntityManager. This led to the problem that one part couldn't see new data from the other part and thus also created a database record, resulting in a primary key constraint violation.
It is planned to migrate the old code completely to get rid of J2EE, but in the meantime - what can I do to coordinate database access between the two parts? And shouldn't at some point within the application server both ways come together in the Hibernate layer, regardless if accessed via JPA or directly?
You can mix both the Hibernate Session and the EntityManager in the same application without any problem. The EntityManagerImpl simply delegates calls to a private SessionImpl instance.
What you describe is a transaction configuration anomaly. Every database transaction runs in isolation (unless you use READ_UNCOMMITTED, which I guess is not the case), but once you commit, the changes are available to any other transaction or connection. So once a transaction is committed you should see all changes in any other Hibernate Session, JDBC connection or even your database UI manager tool.
You said that there was a primary key conflict. This can't happen if you use the Hibernate identity or sequence generators. With the legacy hi/lo generator you can have problems if an external connection inserts records into the same table for which Hibernate uses the hi/lo identifier generator.
This problem can also occur with a master/master replication anomaly. If you have multiple nodes and there is no strictly consistent replication, you can end up with primary key constraint violations.
Update
Solution 1:
When coordinating the new and the old code trying to insert the same entity, you could have select-then-insert logic running in a SERIALIZABLE transaction. The SERIALIZABLE transaction acquires the appropriate locks on your behalf, so you can still have a default READ_COMMITTED isolation level while only the problematic service methods are marked SERIALIZABLE.
So both the old code and the new code run a select checking whether there is already a row satisfying the constraint, and only insert if nothing is found. The SERIALIZABLE isolation level prevents phantom reads, so it should prevent the constraint violations.
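A hedged sketch of that select-then-insert service method, assuming Spring transactions (entity and finder names are illustrative):
@Transactional(isolation = Isolation.SERIALIZABLE)
public Item findOrCreate(String businessKey) {
    // SERIALIZABLE prevents another transaction from inserting the same
    // row between our select and our insert
    return itemRepository.findByBusinessKey(businessKey)
            .orElseGet(() -> itemRepository.save(new Item(businessKey)));
}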
Solution 2:
If you are open to delegating this task to JDBC, you might also investigate the MERGE SQL statement, if your current database supports it. Basically, this is an upsert operation issuing an update or an insert behind the scenes. This command is much more attractive since you can run it even at READ_COMMITTED. The only drawback is that you can't use Hibernate for it, and only some databases support it.
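For example via plain JDBC (MERGE syntax varies per database, so treat this as an illustrative sketch, not something that runs everywhere):
try (PreparedStatement ps = connection.prepareStatement(
        "MERGE INTO item t USING (VALUES (?, ?)) AS s(id, name) ON t.id = s.id " +
        "WHEN MATCHED THEN UPDATE SET name = s.name " +
        "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name)")) {
    ps.setLong(1, id);
    ps.setString(2, name);
    ps.executeUpdate(); // update or insert, decided atomically by the database
}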
If you separately instantiate a SessionFactory for the old code and an EntityManagerFactory for the new code, that can lead to different values in the first-level caches. If, during a single HTTP request, you change a value in the old code but do not immediately commit, the value will be changed in that session's cache, but it will not be visible to the new code until it is committed. Independently of any transaction or database locking that would protect persistent values, mixing two different Hibernate sessions can do weird things to in-memory values.
I assume that the injected EntityManager still uses Hibernate. IMHO the most robust solution is to get the EntityManagerFactory for the persistence unit and cast it to Hibernate's EntityManagerFactoryImpl. Then you can directly access the underlying SessionFactory:
SessionFactory sessionFactory = entityManagerFactory.getSessionFactory();
You can then safely use this SessionFactory in your old code, because now it is unique in your application and shared between old and new code.
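On JPA 2.1 and later you can avoid the vendor-specific cast altogether:
SessionFactory sessionFactory = entityManagerFactory.unwrap(SessionFactory.class);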
You still have to deal with the problems of session creation/closing and transaction management. I suppose this is already implemented in the old code. Without knowing more, I think you should port it to JPA, because I am pretty sure that if an EntityManager exists, sessionFactory.getCurrentSession() will give you its underlying Session, but I cannot say the same for the opposite direction.
I've run into a similar problem when I had a list of enumerated lookup values, where two pieces of code would check for the existence of a given value in the list, and if it didn't exist the code would create a new entry in the database. When both of them came across the same non-existent value, they'd both try to create a new one and one would have its transaction rolled back (throwing away a bunch of other work we'd done in the transaction).
Our solution was to create those lookup values in a separate transaction that committed immediately; if that transaction succeeded, then we knew we could use that object, and if it failed, then we knew we simply needed to perform a get to retrieve the one saved by another process. Once we had a lookup object that we knew was safe to use in our session, we could happily do the rest of the DB modifications without risking the transaction being rolled back.
It's hard to know from your description whether your data model would lend itself to a similar approach, where you'd at least commit the initial version of the entity right away, and then once you're sure you're working with a persistent object you could do the rest of the DB modifications that you knew you needed to do. But if you can find a way to make that work, it would avoid the need to share the Session between the different pieces of code (and would work even if the old and new code were running in separate JVMs).
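A hedged sketch of that pattern with container-managed transactions (EJB-style; the LookupValue entity is illustrative):
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public LookupValue createLookup(String value) {
    // commits in its own short transaction; if it fails because another
    // process inserted the same value first, the caller catches the
    // exception and re-reads the existing row instead
    LookupValue lookup = new LookupValue(value);
    em.persist(lookup);
    return lookup;
}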

What is the difference between CacheStoreMode USE and REFRESH

The javadocs for CacheStoreMode draw a distinction that I cannot really grasp.
The javadocs for the USE mode:
Insert/update entity data into cache when read from database and when committed into database: this is the default behavior. Does not force refresh of already cached items when reading from database.
The javadocs for the REFRESH mode differ in the last sentence:
Forces refresh of cache for items read from database.
When an existing cached entity instance is updated upon reading from the database, this would typically mean overwriting the existing data anyway. So what is the difference between forcing and not forcing a refresh in this context?
Thank you.
As far as I know:
CacheStoreMode.USE should be used if a given EntityManagerFactory has exclusive write access to the underlying database; it implies that there is no chance for an entity instance stored in the shared cache to become stale.
CacheStoreMode.REFRESH should be enabled if the underlying database might be accessed by multiple committers (i.e. EntityManagerFactory instances, applications in different JVMs, external JDBC sources), so that an entity instance stored in the shared cache may become stale.
While CacheStoreMode.USE does not force a refresh of already cached entities when reading from the database, CacheStoreMode.REFRESH does.
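Both modes can be set per EntityManager or per query via the standard property, for instance (the Item entity is illustrative):
em.setProperty("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
em.createQuery("select i from Item i", Item.class)
        .setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH)
        .getResultList();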
I think it will make a difference where the most recently updated data from the database is needed, i.e. where the data gets updated from the back-end and not through the application.
In my application it's the same case (though without any cache strategy): we have to load all the data each time, as it gets modified implicitly through messaging from an external system; otherwise we would be dealing with stale data.
There might be a few cases, such as scheduled jobs or external systems updating the database directly, where CacheStoreMode.REFRESH is appropriate, while for the normal case CacheStoreMode.USE is.
[Other than this I can't recall any other cases where it might make a difference between these two modes.]
Edit: The documentation seems confusing and too short to explain this properly. Also, in the case of native queries, bulk updates etc., items are skipped and aren't cached.
CacheStoreMode.USE: only new items are put into the cache, not already cached ones.
CacheStoreMode.REFRESH: new items are put into the cache, and already cached items are refreshed.

Caching detached Hibernate entities in an external cache

I have a set of very heavy queries whose results I want to cache in an external cache implementation (caching the whole object list, not just the ids as in Hibernate's 2nd level cache).
The issue is that, due to the lazy loading of several collections in the root object, once the session that queried the results is closed, the objects become detached and the next request that tries to use an object might throw a LazyInitializationException.
Environment: Spring 4, Hibernate 4.3, Ehcache.
Is there any way to re-attach the object to a new session without having it modify the underlying DB (as merge and update would)?
There is no way to reattach a detached entity to a session just to load a lazy-initialized collection.
In order to get an updated copy of a persistent object without overwriting the session / calling merge, it's necessary to call either EntityManager.find() or do a query.
This is because the main goal of the session is to keep the database and the objects in memory in sync. Due to this there is no API for attaching new state without persisting it, as this is not in line with the main functionality of the session.
The 2nd level cache, if configured together with the query cache, can solve the problem of caching the entities, the queries and their associations in a much better way than any custom solution.
Everything can get cached to the point that no query hits the database. The two caches really go together; check this blog post for further info.
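With Hibernate 4.x the query-cache opt-in looks roughly like this (a sketch assuming hibernate.cache.use_query_cache=true and cacheable entities; the Order entity is illustrative):
List orders = session.createQuery("from Order o where o.customer.id = :id")
        .setParameter("id", customerId)
        .setCacheable(true) // caches the ids; the entities come from the 2nd level cache
        .list();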
