I'm writing an application (a CMS) using JPA/Hibernate. In a single UI I have multiple components that might show the same entity (each showing only a portion of it), and there are multiple UIs, in multiple sessions, for multiple users.
Some of these components can also edit the entity, and all the other components should always show the up-to-date entity.
A crude approach would be a periodic refresh, but that is laggy and heavy, so I came up with a synchronization mechanism for JPA.
Using an interceptor (the Hibernate one, since the JPA one is too limited) I can listen to all transactions and all new/updated/removed entities, and send notifications to every component interested in them.
I can also catch derived transactions: if a component, responding to a notification, modifies the entity in any way (opening a new transaction), I can resend the notification (only a delta).
This is because a component may modify the entity in such a way that another component needs to modify it again. (A trivial example: one component sets the birth date, and another recalculates the age.)
P.S. The notifications are dispatched only after the transaction commits. This is mainly because
Section 3.5 of the JPA specification states:
“In general, the lifecycle method of a portable application should not invoke EntityManager or Query operations, access other entity instances, or modify relationships within the same persistence context. A lifecycle callback method may modify the non-relationship state of the entity on which it is invoked.”
so the listeners would be useless if notified inside the transaction.
And also because it lets me group modified entities and notify them all together.
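A minimal sketch of the interceptor side of this (assuming the Hibernate 3.x-era Interceptor API; the NotificationBus is a hypothetical dispatcher, not part of Hibernate):

```java
import org.hibernate.EmptyInterceptor;
import org.hibernate.Transaction;
import org.hibernate.type.Type;

import java.io.Serializable;
import java.util.LinkedHashSet;
import java.util.Set;

// One interceptor instance per Session: collects the ids of entities touched
// in the current transaction and dispatches them only after a successful commit.
public class ChangeTrackingInterceptor extends EmptyInterceptor {

    private final Set<Serializable> touched = new LinkedHashSet<Serializable>();

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        touched.add(id); // record the update; do not modify state here
        return false;
    }

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        touched.add(id); // record the insert
        return false;
    }

    @Override
    public void afterTransactionCompletion(Transaction tx) {
        if (tx.wasCommitted() && !touched.isEmpty()) {
            // one grouped notification per transaction, dispatched post-commit
            NotificationBus.publish(new LinkedHashSet<Serializable>(touched)); // hypothetical bus
        }
        touched.clear();
    }
}
```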
This notification mechanism is growing complex, and I was wondering:
Why doesn't JPA have such a mechanism?
Am I inventing something strange?
How do "real applications" solve this problem?
Related
We are developing a Java program which has lots of independent components (let's say A, B, C, D). We use the Hibernate framework for database mapping. Some of the components run in the same JVM process and some in different JVM processes. All of these components have REST endpoints and their own entity mapping files, and they talk to each other over TCP. Some business logic requires a few of these components to work together, and that is where the problem starts. Let's say component A is called. A starts a new transaction, changes some data in the database, then calls component B. When component B accepts the request it opens a new transaction, changes some information in the database, and commits. After finishing its work it returns the result to component A. Component A continues with some operations but fails, rolls back all the changes it has made, and returns the response to the caller. But the changes component B made are still there. This is a small illustration of our program. What are the best approaches to handle this?
One option is to somehow share the transaction between all the components, so that when a component receives a request it first checks whether there is an active transaction for that request.
The second option is using one component for all entities and having all the others go through it. When a request comes in we open a transaction, all components do their work, and when we return the response we commit if there was no exception, otherwise we roll back.
Is there any other way? What are the best practices for using transactions in a multi-component architecture? What are the pros and cons of keeping all entity information in a single component? Any posts, blogs, documentation, tutorials or books are welcome, and feel free to share your own experience.
You are looking to implement distributed transactions.
I recommend Spring if you have a Java application, as it supports distributed transactions:
https://www.javaworld.com/article/2077963/open-source-tools/distributed-transactions-in-spring--with-and-without-xa.html
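As a minimal sketch (assuming Spring plus a standalone JTA provider such as Atomikos on the classpath; bean and class names are illustrative), wiring a JtaTransactionManager lets a single @Transactional method span multiple XA resources:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class TxConfig {

    @Bean
    public PlatformTransactionManager transactionManager() {
        // Delegates to a JTA UserTransaction/TransactionManager (e.g. Atomikos),
        // so one @Transactional method can enlist several XA resources and
        // commit or roll them back atomically.
        return new JtaTransactionManager();
    }
}
```

Note that the linked article also covers non-XA alternatives ("best efforts 1PC"), which are often good enough when full two-phase commit is too heavy.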
Good day everyone. I have a Java application that uses JPA (EclipseLink) for database access, with some CRUD operations. How can I implement synchronization in this case? I mean, if two users, User1 and User2, start the application on their machines and User1 changes some records, how can User2 see the change? Is there a way to make User2 aware that User1 changed a record, and update only that record? The same problem has been discussed here: How to synchronize multiple clients with a shared database (JPA)? and Updated data (in database) is not visible via JPA/EclipseLink.
But the only suggestion there is to update on a timer. Is that the common way to do such things?
Thank you for your help.
[EDIT]
Monitor MySQL inserts from different application
How to make a database listener with Java?
Change notification on domain objects (Hibernate/Java)
Please point me in the right direction for resolving my problem. Hope this can help somebody.
You should create a new EntityManager instance for each transaction. I suggest using Spring with a JTA transaction manager and letting the container manage the entity manager scope.
See http://spring.io/blog/2011/08/15/configuring-spring-and-jta-without-full-java-ee/
[edit]
Note that while there is a refresh(someEntity) method on the EntityManager, there is no refreshAll() method. This is because the EM is not designed to last a long time and be refreshed.
If you let the container (Spring is advised for a standalone app) manage the persistence context (a container-managed EntityManager), it will instantiate a new EM for each transaction. In other words, each time you invoke a method annotated with @Transactional, a new EM will be instantiated for the lifecycle of the method.
In this case you don't need to take care of data synchronization: each time you want the grid to be refreshed, you call the transactional getMyEntityList() method again, which will retrieve a fresh set of entities to display in the grid. You can of course use a timer to trigger the refresh.
The trick is to never leave unpersisted modifications in memory. Each time a user updates the grid, open a new transaction and persist the modification; each time you refresh, retrieve a new up-to-date persistence context and let the GC dispose of the old, unreferenced entities.
If you don't want User1 to be able to override User2's data, configure optimistic locking.
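Optimistic locking boils down to adding a @Version attribute (the entity shown is illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Person {

    @Id
    private Long id;

    @Version // incremented by the provider on every update; committing a stale
    private long version; // copy raises an OptimisticLockException instead of
                          // silently overwriting the other user's changes

    private String name;
}
```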
Otherwise, if you absolutely want to maintain an application-scoped EM for performance reasons (to avoid regularly re-retrieving data from the DB), you can set up a messaging topic so the different application instances can notify each other of data updates, but this will lead to additional work and constraints.
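The container-managed pattern described above might look like this sketch (Spring-style; class and query names are illustrative):

```java
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyEntityService {

    @PersistenceContext
    private EntityManager em; // transaction-scoped proxy: a fresh EM per transaction

    @Transactional(readOnly = true)
    public List<MyEntity> getMyEntityList() {
        // each call runs in a new persistence context, so the grid
        // always receives up-to-date entities from the database
        return em.createQuery("select e from MyEntity e", MyEntity.class).getResultList();
    }

    @Transactional
    public void update(MyEntity edited) {
        em.merge(edited); // persist user edits right away; nothing stays dirty in memory
    }
}
```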
I need to track the state changes of a Java entity class that uses MySQL as its database. I know the EntityManager has a mechanism to track state changes for the entities it is managing. What I want is to access those state changes: I want my application to fire an event to inform another application about the new state of the entity. From what I have gathered so far, there is no API that I can use to check which lifecycle state a particular entity is in. Or is there?
Does anyone have information on how to approach this?
JPA defines listener interfaces that you can implement in order to be notified of lifecycle events for the entities.
Have a look at this article: http://www.objectdb.com/java/jpa/persistence/event
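A sketch of the callback-listener approach (JPA 2.0 annotations; the class names are illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.Id;
import javax.persistence.PostPersist;
import javax.persistence.PostRemove;
import javax.persistence.PostUpdate;

public class StateChangeListener {

    @PostPersist
    @PostUpdate
    @PostRemove
    public void onChange(Object entity) {
        // fire your application event here; per section 3.5 of the spec you
        // should not use the EntityManager or navigate relationships in a callback
        System.out.println("state changed: " + entity);
    }
}

@Entity
@EntityListeners(StateChangeListener.class)
class Customer {
    @Id
    Long id;
}
```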
EclipseLink provides a set of events at the session level (SessionEventListener).
I assume you only want to be notified of committed changes, after they have been committed? For this you can use the postCommitUnitOfWork event. The event/unit of work contains a UnitOfWorkChangeSet with the list of changes that were made in the transaction.
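A sketch using EclipseLink's native session events (registration is shown via the session's event manager; the exact wiring depends on your setup):

```java
import org.eclipse.persistence.sessions.SessionEvent;
import org.eclipse.persistence.sessions.SessionEventAdapter;

public class CommitListener extends SessionEventAdapter {

    @Override
    public void postCommitUnitOfWork(SessionEvent event) {
        // fires only after a successful commit; the unit of work behind
        // event.getSession() carries the change set for this transaction
        System.out.println("committed: " + event.getSession());
    }
}

// registration, e.g. from a SessionCustomizer:
//   session.getEventManager().addListener(new CommitListener());
```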
There appear to be two patterns for implementing business transactions that span several HTTP requests with JPA:
entity-manager-per-request with detached entities
extended persistence context
What are the respective advantages of these patterns? When should which be preferred?
So far, I came up with:
an extended persistence context guarantees that object identity is equivalent to database identity, simplifying the programming model and potentially dispelling the need to implement equals for entities
detached entities require less memory than an extended persistence context, as the persistence context also has to store the previous state of the entity for change detection
no longer referenced detached entities become eligible for garbage collection; persistent objects must first be detached explicitly
However, not having any practical experience with JPA, I am sure I have missed something of importance; hence this question.
In case it matters: We intend to use JPA 2.0 backed by Hibernate 3.6.
Edit: Our view technology is JSF 2.0, in an EJB 3.1 container, with CDI and possibly Seam 3.
Well, I can enumerate the challenges of trying to use extended persistence contexts in a web environment. Some things also depend on what your view technology is and whether it binds entities directly or uses view-level middlemen.
1. EntityManagers are not threadsafe. You don't need one per user session; you need one per user session per browser tab.

2. When an exception comes out of an EntityManager, it is considered invalid and needs to be closed and replaced. If you're planning to write your own framework extensions for managing the extended lifecycle, the implementation of this needs to be bulletproof. Generally, in an EM-per-request setup the exception goes to some kind of error page, and loading the next page creates a new EM anyway, as it always would have.

3. Object equality is not going to be 100% automagically safe. As above, an exception may have invalidated the context that an object loaded earlier was associated with, so one fetched now will not be equal to it. Relying on identity also assumes an extremely high level of skill and understanding of how JPA works and what the EM does among the developers using it. E.g., accidentally using merge when it wasn't needed will return a new object which will not satisfy == with its field-identical predecessor. (Treating merge like a SQL UPDATE is an extremely common JPA newbie error, particularly because it's just a no-op most of the time, so it slides past.)

4. If you're using a view technology that binds POJOs (e.g., Spring MVC) and you're planning to bind web form data directly onto your entities, you'll get into trouble quickly. Changes to an attached entity will become persistent on the next flush/commit, regardless of whether they were made in a transaction or not. A common error: a web form comes in and binds some invalid data onto an entity; validation fails and tries to return a screen to inform the user; building the error screen involves running a query; the query triggers a flush/commit of the persistence context; the changes bound to the attached entity get flushed to the database, hopefully causing a SQL exception, but maybe just persisting corrupt data.

(Problem 4 can of course also happen with session-per-request if the programming is sloppy, but you're not forced to actively work hard at avoiding it.)
I'm building an application using JPA 2.0 (Hibernate implementation), Spring, and Wicket. Everything works, but I'm concerned that my form behaviour is based around side effects.
As a first step, I'm using the OpenEntityManagerInViewFilter. My domain objects are fetched by a LoadableDetachableModel which performs entityManager.find() in its load method. In my forms, I wrap a CompoundPropertyModel around this model to bind the data fields.
My concern is the form submit actions. Currently my form submits pass the result of form.getModelObject() into a service method annotated with @Transactional. Because the entity inside the model is still attached to the entity manager, the @Transactional annotation is sufficient to commit the changes.
This is fine, until I have multiple forms that operate on the same entity, each of which changes a subset of the fields. And yes, they may be accessed simultaneously. I've thought of a few options, but I'd like to know any ideas I've missed and recommendations on managing this for long-term maintainability:
Fragment my entity into sub-components corresponding to the edit forms, and create a master entity linking these together with @OneToOne relationships. Causes an ugly table design, and makes it hard to change the forms later.
Detach the entity immediately after it's loaded by the LoadableDetachableModel, and manually merge the correct fields in the service layer. Hard to manage lazy loading; may need specialised versions of the model for each form to ensure the correct sub-entities are loaded.
Clone the entity into a local copy when creating the model for the form, then manually merge the correct fields in the service layer. Requires implementing a lot of copy constructors / clone methods.
Use Hibernate's dynamicUpdate option to update only the changed fields of the entity. Causes non-standard JPA behaviour throughout the application, is not visible in the affected code, and creates a strong tie to the Hibernate implementation.
EDIT
The obvious solution is to lock the entity (i.e., the row) when you load it for form binding. This ensures that the lock-owning request reads/binds/writes cleanly, with no concurrent writes taking place in the background. It's not ideal, so you'd need to weigh up the potential performance impact (depending on the level of concurrent writes).
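The locking load might be sketched as follows (JPA 2.0 lock modes; class names are illustrative):

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class FormLoader {

    public MyEntity loadForEditing(EntityManager em, Long id) {
        // takes a row lock (SELECT ... FOR UPDATE on most databases), so no
        // concurrent request can write the row until this transaction ends
        return em.find(MyEntity.class, id, LockModeType.PESSIMISTIC_WRITE);
    }
}
```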
Beyond that, assuming you're happy with "last write wins" on your property sub-groups, Hibernate's dynamicUpdate would seem like the most sensible solution, unless you're thinking of switching ORMs anytime soon. I find it strange that JPA seemingly doesn't offer anything that allows you to update only the dirty fields, and I find it likely that it will in the future.
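For reference, a dynamicUpdate sketch in its Hibernate 3.x annotation form (later Hibernate versions replace this with @DynamicUpdate; the entity is illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@org.hibernate.annotations.Entity(dynamicUpdate = true)
public class MyEntity {

    @Id
    private Long id;

    // with dynamicUpdate, the generated UPDATE statement includes only the
    // columns whose values actually changed, so two forms editing disjoint
    // field sub-groups won't overwrite each other's columns
    private String name;
    private String address;
}
```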
Additional (my original answer)
Orthogonal to this is how to ensure you have a transaction open when your Model loads an entity for form binding. The concern is that the entity's properties are updated at that point, and outside a transaction this leaves the JPA entity in an uncertain state.
The obvious answer, as Adrian says in his comment, is to use a traditional transaction-per-request filter. This guarantees that all operations within the request occur in a single transaction. It will, however, definitely use a DB connection on every request.
There's a more elegant solution, with code, here. The technique is to lazily instantiate the EntityManager and begin the transaction only when required (i.e. when the first EntityModel.getObject() call happens). If there is a transaction open at the end of the request cycle, it is committed. The benefit is that no DB connections are ever wasted.
The implementation given uses the Wicket RequestCycle object (note this is slightly different in v1.5 onwards), but the whole implementation is in fact fairly general, so you could use it (for example) outside Wicket via a servlet Filter.
After some experiments I've come up with an answer. Thanks to #artbristol, who pointed me in the right direction.
I have set a rule in my architecture: DAO save methods must only be called to save detached entities. If the entity is attached, the DAO throws an IllegalStateException. This helped track down any code that was modifying entities outside a transaction.
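The guard rule can be sketched as follows (the DAO shape is illustrative):

```java
import javax.persistence.EntityManager;

public abstract class BaseDao<T> {

    protected EntityManager em;

    public T save(T entity) {
        if (em.contains(entity)) {
            // attached entities would be written implicitly on flush; force
            // all modifications to go through an explicit detached save
            throw new IllegalStateException("save() accepts detached entities only");
        }
        return em.merge(entity);
    }
}
```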
Next, I modified my LoadableDetachableModel to have two variants. The classic variant, for use in read-only data views, returns the entity from JPA, which supports lazy loading. The second variant, for use in form binding, uses Dozer to create a local copy.
I have extended my base DAO to have two save variants. One saves the entire object using merge, and the other uses Apache BeanUtils to copy a list of properties.
This at least avoids repetitive code. The downsides are the need to configure Dozer so that it doesn't pull in the entire database by following lazy-loaded references, and having yet more code that refers to properties by name, throwing away type safety.