I have a method that receives a JPA Entity and its related EntityManager as parameters. The Entity instance is not created inside the class, and it may well be shared by other classes (such as GUIs).
The method starts a transaction, carries out some changes on the entity, and finally commits the transaction.
In case the commit fails, EntityTransaction.rollback() is called: in accordance with JPA specifications, the entity is then detached from the manager.
In case of failure, the application needs to discard the pending changes, restore the original values inside the entity e, and re-attach it to the EntityManager, so that the various scattered references to the e object remain valid. The problem arises here: as far as I understand, this is not a straightforward operation with the EntityManager API:
calling EntityManager.refresh(e) is not possible since e is detached.
doing e = EntityManager.merge(e) would create a new instance for e: all the other references to the original e in the program at runtime would not be updated to the new instance (see the snippet after this list). This is the main issue.
moreover (though I'm actually not quite sure about this), EntityManager.merge(e) would update the new managed instance's values with the values currently held by e (i.e., the values that probably caused the commit to fail), whereas what I need is to reset them.
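A minimal illustration of the second bullet, assuming e is a detached entity:
Entity managed = em.merge(e); // returns a different managed instance
assert managed != e;          // references elsewhere still point at the detached 'e'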
Sample code:
public void method(EntityManager em, Entity e) {
    EntityTransaction et = em.getTransaction();
    et.begin();
    ...
    // apply some modifications to the entity's fields
    ...
    try {
        et.commit();
    } catch (Exception ex) { // renamed so it doesn't shadow the entity parameter 'e'
        et.rollback();
        // now that 'e' is detached from the EntityManager, how can I:
        // - refresh 'e', discarding all pending changes
        // - without instantiating a copy (i.e. without using merge())
        // - reattach it
    }
}
What is the best approach in this case?
A possible solution would be something like this:
public class YourClass {

    private EntityManager em = ...; // local entity manager

    public void method(Entity e) { // only the entity is passed in here
        Entity localEntity = em.find(Entity.class, e.getId());
        EntityTransaction et = em.getTransaction();
        et.begin();
        ...
        // apply some modifications to the local entity's fields
        applyChanges(localEntity);
        ...
        try {
            et.commit();
            // Changes were successfully committed to the database. Also apply
            // the changes to the original entity, so they are visible in the GUI.
            applyChanges(e);
        } catch (Exception ex) {
            et.rollback();
            // the original entity e remains unchanged
        }
    }

    private void applyChanges(Entity e) {
        ...
    }
}
For a simple batch update of a MariaDB table, properly mapped as a Hibernate entity class, an update via Hibernate produces the error
org.hibernate.StaleStateException: Batch update returned unexpected row count from update
Each table record is modeled by an Entity class, a simple POJO that needs to be updated (if it already exists in the table) or inserted as a new record (if it does not), with a primary id field (not auto-incremented) and some other values, all scalar. The error can be reproduced by the following method.
public static void update(Set<Long> ids) {
    Session session = createSession();
    Transaction transaction = session.beginTransaction();
    try {
        for (Long id : ids) {
            Entity entity = session.get(Entity.class, id);
            if (entity == null) {
                entity = new Entity();
            }
            entity.setId(id);
            // Other entity value settings
            session.saveOrUpdate(entity);
        }
        transaction.commit();
    } catch (Exception e) {
        transaction.rollback();
    } finally {
        session.close();
    }
}
What is the correct way of implementing the above operation in Hibernate?
You are using saveOrUpdate(); with that method, Hibernate decides by its own logic what is a new (transient) object and what is an old (persisted) one, and performs either save() or update() accordingly.
Hibernate assumes that an instance is an unsaved transient instance if:
The identifier property is null.
The version or timestamp property (if it exists) is null.
A new instance of the same persistent class, created by Hibernate internally, has the same database identifier values as the given instance.
You supply an unsaved-value in the mapping document for the class, and the value of the identifier property matches. The unsaved-value attribute is also available for version and timestamp mapping elements.
Entity data with the same identifier value isn't in the second-level cache.
You supply an implementation of org.hibernate.Interceptor and return Boolean.TRUE from Interceptor.isUnsaved() after checking the instance in your code.
Otherwise, the entity is considered already saved (persistent).
In your example, Hibernate did not recognize the object as new (transient) and, as a result, performed update() on it. This produced an UPDATE instead of an INSERT statement; an UPDATE statement for a non-existing record returns zero updated rows, which is the reason for your exception.
Solution: explicitly use the save() method for new entities:
public void update(Set<Long> ids) {
    Session session = getSessionFactory().openSession();
    Transaction transaction = session.beginTransaction();
    try {
        for (Long id : ids) {
            HibernateEntity entity = session.get(HibernateEntity.class, id);
            if (entity == null) {
                entity = new HibernateEntity();
            }
            // Other entity value settings
            entity.setValue(entity.getValue() + "modification");
            if (entity.getId() == null) {
                // new (transient) instance: assign the id and save it explicitly
                entity.setId(id);
                session.save(entity);
            }
            // existing (persistent) instances are updated automatically on flush
        }
        transaction.commit();
    } catch (Exception e) {
        transaction.rollback();
    } finally {
        session.close();
    }
}
There is no need to call the update() method explicitly. According to the documentation, transactional persistent instances (i.e. objects loaded, saved, created or queried by the Session) can be manipulated by the application, and any changes to their persistent state will be persisted when the Session is flushed.
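A minimal sketch of that automatic dirty checking, reusing the names from the example above:
Session session = getSessionFactory().openSession();
Transaction transaction = session.beginTransaction();
HibernateEntity entity = session.get(HibernateEntity.class, 1L); // persistent instance
entity.setValue("new value"); // no explicit update() call needed
transaction.commit();         // flush detects the change and issues the UPDATE
session.close();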
I want to test that my controller endpoint returns an appropriate error code when trying to delete a record with referencing child records. In my integration test, I need to set up the state so that the related records exist, then invoke the deletion endpoint, expect the error condition, and then (ideally) roll the entire DB back to the state it was in before the test.
e.g.
INSERT INTO parent_rec (id) VALUES ("foo");
INSERT INTO child_rec (id, parent_id) VALUES ("bar", "foo");
COMMIT;
DELETE FROM parent_rec WHERE id = "foo"; -- bang!
@PersistenceContext
EntityManager em;

@Transactional
void testDelete() {
    // Set up records
    ParentRecord record = new ParentRecord("foo");
    em.persist(record);
    em.persist(new ChildRecord("bar", record));
    // delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
However, I'm running into issues. If I put the @Transactional annotation at the method or class level, the records aren't persisted until after the deletion is attempted, so the deletion returns a 200 OK rather than a 400 Bad Request or similar.
The current solution is for the tests to be run in order (with a previous test setting up records which a subsequent test tries to operate on). However, this makes the tests pretty brittle and dependent on each other, which I'd like to avoid primarily to make changing the code easier.
Can I accomplish what I want without using an additional layer of tooling? In the past, I'd have used DBUnit to do something like this, but if I can avoid adding the additional dependency I'd prefer to keep it simple.
In JEE I solved these issues kind of simply by splitting my code into two parts:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class ParentRecordTestFacade {

    public void create() {
        // Create record here
    }

    public void delete() {
        // Delete record here
    }
}
and then call both methods in the actual unit test, one after another.
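A hypothetical usage in the test itself (the facade is assumed to be an injectable Spring bean, and the expected status is only illustrative):
@Autowired
private ParentRecordTestFacade facade;

@Test
void testDelete() throws Exception {
    facade.create(); // committed immediately in its own REQUIRES_NEW transaction
    mockMvc.perform(delete("/parent/foo")).andExpect(status().isBadRequest());
    facade.delete(); // clean up the committed records afterwards
}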
Running only some code in a separate transaction also comes in handy. You can achieve it, for example, by creating a method that invokes a block of code inside its own transaction:
protected <T> T getInsideTransaction(Function<EntityManager, T> transactional) {
    EntityManager em = null;
    EntityTransaction trx = null;
    try {
        em = entityManagerFactory.createEntityManager();
        trx = em.getTransaction();
        trx.begin();
        return transactional.apply(em);
    } catch (Throwable throwable) {
        // mark the transaction for rollback so the finally block doesn't commit it
        if (trx != null) {
            trx.setRollbackOnly();
        }
        throw throwable;
    } finally {
        if (trx != null && trx.isActive()) {
            if (!trx.getRollbackOnly()) {
                trx.commit();
            } else {
                trx.rollback();
            }
        }
        if (em != null) {
            em.close();
        }
    }
}
Now you can invoke it like this:
void testDelete() {
    // Set up records in their own, separately committed transaction
    getInsideTransaction(em -> {
        ParentRecord record = new ParentRecord("foo");
        em.persist(record);
        em.persist(new ChildRecord("bar", record));
        return null; // Function requires a return value
    });
    // delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
You can invoke an arbitrary block of code within a separate transaction that way.
In Spring, especially for testing such cases in the repository layer, I use org.springframework.test.context.transaction.TestTransaction; it looks like it should work for you too. Pay attention to the @Commit annotation on the test method, otherwise your records will not be saved.
@Commit
void testDelete() {
    // Set up records
    ParentRecord record = new ParentRecord("foo");
    em.persist(record);
    em.persist(new ChildRecord("bar", record));
    TestTransaction.end();   // commits the setup because of @Commit
    TestTransaction.start(); // starts a fresh transaction for the rest of the test
    // delete
    mockMvc.perform(delete("/parent/foo")).andExpect(/* some error code */);
}
But of course, since the setup was committed, you have to delete the records manually afterwards.
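For example, a hypothetical cleanup step (record ids match the test above; flagForCommit() makes sure the cleanup itself is committed too):
@AfterEach
void cleanUp() {
    em.remove(em.find(ChildRecord.class, "bar"));  // child first, because of the FK
    em.remove(em.find(ParentRecord.class, "foo"));
    TestTransaction.flagForCommit();
    TestTransaction.end();
}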
I have an issue with a webapp running in Tomcat, where I have an abstract DAO class with a method called all() that returns all entities from the database or the JPA cache. It seems to correctly return the entities on the initial call, but subsequent calls don't reflect updates made through separate UI calls, which use the entity manager's find() method to locate the specific entity from the list, update the relevant fields and commit. When I later view the list via the same all() method, I still see the original values. If I make another update, the logs show the value changing from the correct value (not the original value) to the updated value, so the updates themselves are happening correctly each time.
I'm using Guice for injection. I've played around with the logging and can see the same hash code on the entity manager being used throughout a request, but a different one for each request. I've also tried the following in the persistence.xml file, which didn't seem to help either:
<property name="eclipselink.cache.shared.default" value="false" />
<shared-cache-mode>NONE</shared-cache-mode>
I can't see why my all() won't return updated results. I've also tried adding code to find the specific entity I'm updating in the list and then replace it by calling the following:
entity = em.find(Class.class, id);
This seemed to fix the issue for that particular entity, so it appears my query is reusing stale data.
Here's a snippet from my DAO class
private final Provider<EntityManager> emP;

protected EntityManager em(boolean useTransaction) throws DaoException {
    return useTransaction ? begin() : emP.get();
}

public List<T> all() throws DaoException {
    EntityManager em = em(false);
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<T> cq = cb.createQuery(eClass);
    Root<T> from = cq.from(eClass);
    return em.createQuery(cq.select(from)).getResultList();
}

public T save(T t) throws DaoException {
    return save(t, em(true));
}

protected T save(T t, EntityManager em) throws DaoException {
    if (Objects.isNull(t)) {
        throw new DaoException("can't save null object: " + getDaoClass(), new NullPointerException());
    }
    T saved;
    if (t.getId() > 0) {
        saved = em.merge(t);
    } else {
        em.persist(t);
        saved = t;
    }
    autoCommit();
    return saved;
}

protected void autoCommit() throws DaoException {
    if (autoCommit) {
        commit();
    }
}

public void commit() throws DaoException {
    EntityManager em = emP.get();
    if (!em.getTransaction().isActive()) {
        throw new DaoException("transaction isn't active, unable to commit");
    }
    try {
        em.getTransaction().commit();
    } catch (IllegalStateException e) {
        throw new DaoException("transaction not active", e);
    } catch (RollbackException e) {
        throw new DaoException("commit rolled back", e);
    }
}
So I'm wondering if anyone has any insight into why this might be happening, or any suggestions on what else I can check?
So I found the cause of the issue I was having. I was using the @ElementCollection annotation in my entities when referencing lists. I removed the annotation and replaced it with @JoinTable and @OneToMany annotations, and things are working correctly.
The issue was that, while the entity itself was stored in the database fine and I was updating it as expected, JPA had embedded the list of entities wherever it was referenced.
So I was seeing the embedded list returned each time, which was not the actual entity I had updated. My entities now use proper join tables instead of embedded objects, and everything behaves as expected; the change is sketched below.
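A minimal sketch of that mapping change (the Item type and the table/column names are illustrative, not from the original code):
// Before: the referenced objects were treated as embedded values
@ElementCollection
private List<Item> items;

// After: a real association to independent Item entities through a join table
@OneToMany(fetch = FetchType.LAZY)
@JoinTable(name = "owner_item",
           joinColumns = @JoinColumn(name = "owner_id"),
           inverseJoinColumns = @JoinColumn(name = "item_id"))
private List<Item> items;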
In my application, the client keeps a copy of the persisted entities, stored in a collection, to minimize database transactions. Since it's a multi-user system, another user might be viewing the same object as you, say a Task entity. Suppose the second user removes the task from the database while you are viewing it, and you then decide to remove it too. When you try to remove it, I get a StackOverflowError, and of course the removal is not executed (since the task is already removed). Is there a way to catch this using a database, JPA or Hibernate exception? I am using EntityManager objects to remove an entity.
public <T> void remove(T entity) throws PersistenceException {
    log.debug("Removing entity of type " + entity.getClass().getName());
    // TODO add exception handling
    EntityManager em = createEntityManager();
    em.getTransaction().begin();
    em.remove(em.merge(entity));
    em.getTransaction().commit();
    em.close();
}
What you have here is an optimistic locking problem. It's not only removal that will be troublesome; there may also be two or more people editing the same entity (or one will edit an entity while another removes it: what should the end result be?).
In your case, before removing the entity, you need to first load it in your transaction. If it's not found, someone has already removed it; otherwise you can remove it safely, as sketched below.
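A minimal sketch of that load-then-remove pattern (the id-based signature is an assumption; the question's version takes the entity itself):
public <T> void remove(Class<T> type, Object id) throws PersistenceException {
    EntityManager em = createEntityManager();
    try {
        em.getTransaction().begin();
        T managed = em.find(type, id); // re-load inside this transaction
        if (managed != null) {         // null means someone already removed it
            em.remove(managed);
        }
        em.getTransaction().commit();
    } finally {
        em.close();
    }
}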
If you are looking to catch the exception, you may try something like the following:
public <T> void remove(T entity) throws PersistenceException {
    log.debug("Removing entity of type " + entity.getClass().getName());
    EntityManager em = null;
    EntityTransaction et = null;
    try {
        em = createEntityManager();
        et = em.getTransaction();
        if (!et.isActive()) { // you should always have a new txn; else throw an exception
            et.begin();
            // ... your remove logic
            et.commit();
        } else {
            et.setRollbackOnly();
            throw new RuntimeException(..); // optional throw
        }
    } catch (Exception e) {
        if (et != null && et.isActive()) {
            et.rollback();
        }
        throw new RuntimeException(...); // optional throw
    } finally {
        if (em != null) {
            em.close();
        }
    }
}
I wonder if anyone has come across this error and can explain what's happening:
<openjpa-2.1.1-SNAPSHOT-r422266:1087028 nonfatal user error>
org.apache.openjpa.persistence.InvalidStateException:
Primary key field com.qbe.config.bean.QBEPropertyHistory.id of com.qbe.config.bean.QBEPropertyHistory#1c710ab has non-default value.
The instance life cycle is in PNewProvisionalState state and hence an
existing non-default value for the identity field is not permitted.
You either need to remove the @GeneratedValue annotation or modify the
code to remove the initializer processing.
I have two objects, Property and PropertyHistory. Property has a OneToMany List of PropertyHistory:
@OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.MERGE, orphanRemoval = false)
@JoinColumn(name = "PROPERTY_NAME")
@OrderBy("updatedTime DESC")
private List<QBEPropertyHistory> history = new ArrayList<QBEPropertyHistory>();
And the Property object is loaded and saved like this:
public T find(Object id) {
    T t = null;
    synchronized (this) {
        EntityManager em = getEm();
        t = em.find(type, id);
        //em.close(); // If this is uncommented, fetch=LAZY doesn't work. And fetch=EAGER is too slow.
    }
    return t;
}

public T update(T t) {
    synchronized (this) {
        EntityManager em = getEm();
        em.getTransaction().begin();
        t = em.merge(t);
        em.getTransaction().commit();
        em.close();
        return t;
    }
}
In the service layer I load a property using the find(id) method, instantiate a new PropertyHistory, add it to the property with prop.getHistory().add(propHist), then call update(prop) and get the above error.
The error disappears if I close the EntityManager in find(), but that breaks lazy loading, and prop.getHistory() always returns null. If I set fetch=EAGER it becomes unacceptably slow, as there are tens of thousands of records; I need to select thousands of Property objects at a time, and the history is not needed 99.99% of the time.
I can't remove the @GeneratedValue annotation as the error text suggests, because the id really is generated (DB2, auto-increment). So I wonder how I would "modify the code to remove the initializer processing"?
Thanks!
The problem is that you are trying to share an entity across persistence contexts (EntityManagers). You could change your methods to take an EntityManager instance and use the same EM for the find and update operations.
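A minimal sketch of that, keeping the question's generic DAO shape (the findAndUpdate name and the Consumer-based callback are illustrative assumptions):
public T findAndUpdate(Object id, Consumer<T> changes) {
    EntityManager em = getEm(); // one EntityManager for the whole unit of work
    try {
        em.getTransaction().begin();
        T t = em.find(type, id);      // 't' stays managed by this em, so lazy loading works
        changes.accept(t);            // e.g. t.getHistory().add(propHist)
        em.getTransaction().commit(); // dirty checking flushes the new history entry
        return t;
    } finally {
        em.close();
    }
}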