Good day everyone. I have a Java application that uses JPA (EclipseLink) for database access, mostly CRUD operations. How can I synchronize clients in that case? I mean, if two users, User1 and User2, start the application on their machines and User1 changes some records, how can User2 see the change? Is there a way for User2 to know that User1 changed a record and to update only that record? The same problem has been discussed here: How to synchronize multiple clients with a shared database (JPA)? and Updated data (in Database) is not visible via JPA/Eclipselink.
But the only thing suggested there is to refresh on a timer. Is that a common way to do such things?
Thank you for your help
[EDIT]
Monitor MySQL inserts from different application
How to make a database listener with java?
change notification on domain objects (Hibernate/Java)
These pointed me in the right direction for resolving my problem. Hope they can help somebody.
You should create a new EntityManager instance for each transaction. I suggest using Spring with a JTA transaction manager and letting the container manage the entity manager's scope.
See http://spring.io/blog/2011/08/15/configuring-spring-and-jta-without-full-java-ee/
[edit]
Note that while there is a refresh(someEntity) method on the EntityManager, there is no refreshAll() method. This is because the EM is not designed to live for a long time and be refreshed.
If you let the container (Spring is advised for a standalone app) manage the persistence context (a container-managed EntityManager), it will instantiate a new EM for each transaction. In other words, each time you invoke a method annotated with @Transactional, a new EM will be instantiated for the lifecycle of that method.
In this case you don't need to take care of data synchronization: each time you want the grid refreshed, you call the transactional getMyEntityList() method again, which retrieves a fresh set of entities to display in the grid. You can of course use a timer to trigger the refresh.
The trick is to never keep unpersisted modifications in memory. Each time a user updates the grid, open a new transaction and persist the modification; each time you refresh, retrieve a new, up-to-date persistence context and let the GC dispose of the old, unreferenced entities.
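To make that concrete, here is a minimal sketch of such a transactional read method in Spring, assuming a container-managed EntityManager; the MyEntity class and the service name are placeholders, not part of the original question:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyEntityService {

    // Spring injects a shared, transaction-scoped proxy; a fresh persistence
    // context backs every @Transactional invocation.
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(readOnly = true)
    public List<MyEntity> getMyEntityList() {
        // Runs in a new persistence context each time, so the result
        // reflects the current database state, including other users' changes.
        return entityManager
                .createQuery("select e from MyEntity e", MyEntity.class)
                .getResultList();
    }
}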
If you don't want User1 to be able to override User2's data, configure optimistic locking.
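As a hedged illustration of that last point, a minimal sketch of optimistic locking with a JPA @Version field (the Item entity is a placeholder):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Item {

    @Id
    private Long id;

    private String name;

    // JPA increments this column on every successful update; committing a
    // stale copy fails with an OptimisticLockException instead of silently
    // overwriting the other user's change.
    @Version
    private Long version;

    // getters and setters omitted
}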
Otherwise, if you absolutely want to maintain an application-scoped EM for performance reasons (to avoid regularly re-fetching data from the DB), you can set up a messaging topic so the different application instances can notify each other of data updates, but this will add extra work and constraints.
Related
I have a common database that is used by two different applications (different technologies, different deployment servers, they just use the same database).
Let's call them application #1 and application #2.
Suppose we have the following scenario:
the database contains a table called items (doesn't matter its content)
application #2 is developed in Spring Boot and it is mainly used just for reading data from the database
application #2 retrieves an item from the database
application #1 changes that item
application #2 retrieves the same item again, but the changes are not visible
What I understood by reading a lot of articles:
when application #2 retrieves the item, Hibernate stores it in the first level cache
the changes that are done to the item by application #1 are external changes and Hibernate is unaware of them, and thus, the cache is not updated (same happens when you do a manual change in the database)
you cannot disable Hibernate's first level cache.
So, my question is: can you force Hibernate to refresh the entities every time they are read (or make it go to the database) without explicitly calling em.refresh(entity)? The problem is that the business logic module from application #1 is used as a dependency in application #2, so I can only call service methods (i.e. I don't have access to the EntityManager or Session references).
Hibernate's L1 cache is roughly equivalent to a DB transaction run at repeatable-read isolation. Basically, if you read or write some data, the next time you query in the context of the same session you will get the same data. Further, within the same process, sessions run independently of each other, which means two sessions can be looking at different data in their L1 caches.
If you use repeatable read or a lower isolation level, then you shouldn't really be concerned about the L1 cache, as you might run into this scenario regardless of the ORM (or with no ORM at all).
I think you only need to think about the L2 cache here. The L2 cache stores data under the assumption that only Hibernate accesses the DB, which means that if some change happens in the DB from outside, Hibernate might not know about it. If you just disable the L2 cache, you are sorted.
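A hedged sketch of disabling the second-level and query caches when bootstrapping the EntityManagerFactory; the persistence-unit name is a placeholder (in a Spring Boot application the same Hibernate properties would normally be set in the externalized configuration instead):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class NoSecondLevelCacheBootstrap {

    public static EntityManagerFactory build() {
        Map<String, Object> props = new HashMap<>();
        // With both switches off, every entity load and query goes to the database.
        props.put("hibernate.cache.use_second_level_cache", "false");
        props.put("hibernate.cache.use_query_cache", "false");
        // "my-persistence-unit" is a placeholder for the unit defined in persistence.xml.
        return Persistence.createEntityManagerFactory("my-persistence-unit", props);
    }
}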
Further reading - Short description of hibernate cache levels
Well, if you cannot access the Hibernate session you are left with nothing: any operation you want to perform requires session access. For instance, you can remove an entity from the cache after reading it like this:
session.evict(entity);
or this
session.clear();
but first and foremost you need a session. Since you are calling only services, you either need to create service endpoints that clear the session cache after serving the request, or modify the existing endpoints to do that.
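For example, a hedged sketch of a service endpoint that evicts the entity right after reading it, so the next read goes back to the database (the class and entity names are placeholders, and a current-session context is assumed):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ItemReadService {

    private final SessionFactory sessionFactory;

    public ItemReadService(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public Item findFresh(Long id) {
        Session session = sessionFactory.getCurrentSession();
        Item item = session.get(Item.class, id);
        // Detach the instance immediately: a later get() for the same id
        // in this session will hit the database, not the first-level cache.
        session.evict(item);
        return item;
    }
}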
You can also try StatelessSession, but you will lose cascading and some other features.
https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#_statelesssession
https://stackoverflow.com/a/48978736/3405171
You can force a new transaction to start; that way Hibernate will not read from its cache and will re-read from the DB.
You can annotate your method like this:
@Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
By requesting a new transaction, the system will create a new Hibernate session, so the data will not come from the cache.
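A hedged sketch of how that might look on a Spring service read method (the service, entity and method names are placeholders):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ItemQueryService {

    @PersistenceContext
    private EntityManager entityManager;

    // REQUIRES_NEW suspends any surrounding transaction and opens a new one,
    // which also means a new persistence context: the entity is re-read
    // from the database rather than served from an older first-level cache.
    @Transactional(readOnly = true, propagation = Propagation.REQUIRES_NEW)
    public Item findById(Long id) {
        return entityManager.find(Item.class, id);
    }
}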
I have read that session.get(Employee.class, new Long(1)) will take the data either from the cache or from the database.
Suppose two users are accessing the application concurrently:
If User1 does a get, the data is retrieved from the DB and is then kept in the session cache.
User2 then deletes or updates that record.
If User1 does the same get again, the data is served from the cache.
Isn't User1 getting stale data then? Doesn't this fall into the pitfalls of caching?
Or am I missing something here?
One could ask why User1 does session.get twice in the same session, but I would still like to hear other opinions.
You understand it correctly: the cache is bound to the session, and if an object is already loaded into the first-level cache, then no SQL will be executed by get(). You can use evict() to remove one object from the cache, or clear() to remove every object from the cache, without closing the session. Closing the session always discards the entire cache.
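A short, hedged illustration of that behaviour, assuming an open Session named session and the Employee entity from the question:

Employee first = session.get(Employee.class, 1L);   // SELECT issued, entity stored in the L1 cache
Employee second = session.get(Employee.class, 1L);  // no SQL: served from the session cache,
                                                    // even if User2 changed or deleted the row meanwhile
session.evict(first);                               // remove it from the first-level cache
Employee third = session.get(Employee.class, 1L);   // SELECT issued again: current database state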
See a nice explanation here.
You need to read more about Container-managed entity manager
The most common and widely used entity manager in a Java EE environment is the container-managed entity manager. In this mode, the container is responsible for the opening and closing of the entity manager (this is transparent to the application). It is also responsible for transaction boundaries. A container-managed entity manager is obtained in an application through dependency injection or through JNDI lookup. A container-managed entity manager requires the use of a JTA transaction.
That covers what you want to understand and achieve, and how it is used.
More documentation: Entity Manager
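A hedged sketch of the container-managed case described above, with a stateless EJB obtaining the entity manager through injection (the bean, entity and unit names are placeholders):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ItemRepositoryBean {

    // The container opens and closes this JTA, transaction-scoped
    // EntityManager and manages the transaction boundaries.
    @PersistenceContext(unitName = "my-persistence-unit")
    private EntityManager entityManager;

    public Item find(Long id) {
        return entityManager.find(Item.class, id);
    }
}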
No, because Hibernate keeps data in its cache, but if you update the data through Hibernate it will know that a change exists. You will have trouble if you update the data with plain SQL or from some other place where Hibernate cannot see that something happened.
We are developing a (Java SE) application which communicates with many clients via persistent TCP connections. A client connects, performs some/many operations (which are written to a SQL database) and then closes the application / disconnects from the server. We're using Hibernate JPA and manage the EntityManager lifecycle ourselves, using a ThreadLocal variable. Currently we create a new EntityManager instance for every client request, which has worked fine so far. Recently we did some profiling and found that Hibernate performs a SELECT query against the DB before every UPDATE statement. That is because our entities are in the detached state and every new EntityManager attaches the entity to the persistence context first. This leads to a massive SQL overhead when the server is under load (we have a write-heavy application), and we are trying to eliminate that leak.
At first we thought about the 2nd-level cache. However, we discovered that Hibernate invalidates its query and collection caches whenever an item is added or removed.
On second thought, we are evaluating whether to keep an EntityManager open for as long as the client is logged in on the server. But I wonder if this is a best practice, because there are some drawbacks: thread safety, the management overhead of the EntityManager instances, etc.
In short: we are looking for a way to get rid of those SELECT-statements before every UPDATE. Any ideas out there?
One possible way to get rid of the SELECT statements when reattaching detached entities is to use the Hibernate-specific update() operation instead of merge().
update() unconditionally runs an UPDATE SQL statement and makes the detached object persistent. If a persistent object with the same identifier already exists in the session, it throws an exception. Thus, it's a good choice when you are sure that:
Detached object contains modified state that should be saved in the database
Saving that state is the main goal of opening a session for that request (i.e. there were no other operations that loaded entity with the same id in that session)
In JPA 2.0 you can access Hibernate-specific operations as follows:
em.unwrap(Session.class).update(o);
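A hedged sketch of that call in context, assuming a Spring-managed transaction and a detached entity carrying the client's changes (the service and entity names are placeholders):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.Session;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ItemWriteService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void saveChanges(Item detachedItem) {
        // update() schedules an UPDATE for the detached instance directly,
        // without the SELECT that merge() would issue to reattach it first.
        em.unwrap(Session.class).update(detachedItem);
    }
}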
See also:
11.6. Modifying detached objects
One possible option would be to use StatelessSession for the update statements. I've successfully used it in my 'write-heavy' application.
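A hedged sketch of what that could look like; note that a StatelessSession has no first-level cache, no dirty checking and no cascading (names are placeholders):

import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class StatelessItemWriter {

    private final SessionFactory sessionFactory;

    public StatelessItemWriter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void update(Item item) {
        // update() maps straight to an UPDATE statement; no prior SELECT,
        // but also no cascades, interceptors or collection handling.
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        try {
            session.update(item);
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}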
Please describe a typical lifecycle of a Hibernate object (one that maps to a DB table) in a web app.
Suppose you create a new instance of an object and persist it in the DB.
But during the app's lifetime you'll be working on a detached object, and finally
you need to update it in the database, for example on exit.
What does that look like with Hibernate and Spring?
P.S. Can transactions and sessions live across servlet transitions, so that we open one session and use it in all servlets without needing to reopen it?
I'll try to give a descriptive example.
Suppose that when the app starts, a log record is created. This can be done at once:
Log log = new Log(...) and then something like save(log) -- Log corresponds to a table LOG.
Then, as the application processes user input and keeps going, new data accumulates,
and after the second step we could add something to the log object, a collection for example:
// now we have a tracking of what user chosen: Set thisUserChoice,
// so we can update the persistent object, we have new data now !
// log.userChoices = thisUserChoice.
Here lies the core of my question: how are we supposed to deal with this
if we want to update the database whenever new data comes in from the user?
In the relational model we can work with a row id, so we could fetch that record and update some other data in the row.
In Hibernate we are also able to load an object by its id.
But is that the way to go? Is there anything better?
You could do everything in a single session. But that's like doing everything in a single class. It could make sense from a beginner's point of view, but nobody does it like that in practice.
In a web app, you can normally expect to have several threads running at once, each dealing with a different user. Each thread would typically have a separate session, and the session would only have managed instances of the objects that were actually needed by that user. It's not that you can completely ignore concurrency in your own code, but it's useful to have hibernate's help. If you were to do everything with one session, you would have to do all the concurrency management yourself.
Hibernate can also manage the concurrency if you have multiple application servers talking to a single database. The separate JVMs can't possibly share the same session in this case...
The lifecycle is described in the hibernate documentation (which I'm sure you've seen).
Whenever a request comes from the web client to the server, the first thing you should do is load the relevant objects (see section 10.3) so that you have persistent, not detached, entities to deal with. Then you do whatever operations are required. When the session closes (i.e. when the server returns the response to the client), it will write any updates to the database. Or, if your operation involves creating new entities, you'll have to create transient ones (with new) and then call persist() or save() (see section 10.2). That will result in a managed entity -- you can make more changes to it, and Hibernate will record those changes when the session closes.
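A hedged sketch of that per-request flow, reusing the Log entity from the question; the no-argument constructor, the setUserChoices setter, the logId variable and the open SessionFactory are all assumptions for illustration:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    // Load by id: the returned instance is persistent (managed), not detached.
    Log log = session.get(Log.class, logId);

    // Work on the managed instance; dirty checking records the change.
    log.setUserChoices(thisUserChoice);

    // New objects are transient until save()/persist() is called on them.
    Log anotherLog = new Log();
    session.save(anotherLog);

    // Commit flushes the pending UPDATE and INSERT to the database.
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}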
I try to avoid using detached objects. But if I have to use them (perhaps they're stored in the user's HTTP session), then whenever they need to be saved to the database, I use update() (see section 10.6). This converts the object back into a managed one, and the session will save any changes to the database when it is closed.
Spring makes it very easy to generate a new session for each request. You would normally tell Spring to create a sessionFactory, and then every request will be given its own session. Search for "spring hibernate tutorial" and you'll find several examples.
http://scbcd.blogspot.com/2007/01/hibernate-persistence-lifecycle.html This explains transient, persistent objects.
Also have a look at the Lifecycle interface to know what hibernate does (and it provides hooks at all stages for user to do something)
I have an entity that has a state table associated with it. The state table is managed by another process, and contains a list of objects that my business logic must process. I would like to get a new snapshot of the state table each time I reload the entity. How can I ensure that no part of Hibernate or its support libraries ever caches any of the values of this table? Basically, I want to get a new view of the collection every time I call getMyStateValues().
Most of the point of Hibernate is to prevent exactly that and to return a consistent view of an entity's state within the scope of a given transaction. So either reload the whole entity, in a different transaction each time, or, if you need to reload the state table during a business transaction, load only the state rows by the parent entity's id in a separate Hibernate session.
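A hedged sketch of that second option, loading only the state rows by the parent's id in a separate, short-lived session (the StateEntry entity and its owner mapping are placeholders):

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class StateSnapshotLoader {

    private final SessionFactory sessionFactory;

    public StateSnapshotLoader(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public List<StateEntry> loadSnapshot(Long ownerId) {
        // A separate, short-lived session: nothing can be served from an
        // already-populated persistence context.
        Session session = sessionFactory.openSession();
        try {
            return session
                    .createQuery("select s from StateEntry s where s.owner.id = :ownerId", StateEntry.class)
                    .setParameter("ownerId", ownerId)
                    .getResultList();
        } finally {
            session.close();
        }
    }
}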
You can create a method in your entity that queries the database and returns the collection, e.g. getXYXReload(). It's not a very nice design decision, though.
You can use Hibernate's CacheMode. It allows you to instruct a hibernate session on how to interact with the cache. You can get access to the underlying session with:
@PersistenceContext EntityManager manager;
...
org.hibernate.Session session = (Session) manager.getDelegate();
Unfortunately, this technique applies to the whole session, and not specifically to an entity.
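As a hedged follow-up sketch, the unwrapped session (here obtained via the JPA unwrap() method rather than getDelegate()) can be told to bypass the second-level cache for everything it does afterwards; the class name is a placeholder:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.CacheMode;
import org.hibernate.Session;

public class StateTableReader {

    @PersistenceContext
    private EntityManager manager;

    public void disableCacheForSession() {
        Session session = manager.unwrap(Session.class);
        // IGNORE: this session neither reads from nor writes to the second-level cache.
        session.setCacheMode(CacheMode.IGNORE);
    }
}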