JpaRepository, @Transactional, and repository.saveAndFlush - java

I'm taking my first crack at the Service/Repository approach and am running into an issue. Essentially what I want to do in my Service is persist my entity and then use its ID in the same Service method.
Originally I was going to use @GeneratedValue and sequences but gave up and settled on manually flushing the entity and grabbing the ID, which I thought would be easier.
My Repository is an interface using Spring Data, so it has support for manual flushes. As I understand it, it is also annotated with @Transactional. My Service method is also annotated with @Transactional.
What I've found is that the entity is only persisted upon return of the Service method, even when I flush immediately after saving the entity (or use saveAndFlush). I thought that flushing would force the DB changes?

Spring Data JPA returns the saved entity (i.e. with its id set) when you call save, so:
Foo foo = new Foo();
foo = this.fooRepository.save(foo); // also works on collections
this.fooRepository.flush();
// use foo.getId();

Calling the saveAndFlush method will also work:
Foo foo = new Foo();
foo = this.fooRepository.saveAndFlush(foo);
// use foo.getId();

Related

How to use a result object from Jpa repository save() later in the same method that invoked save()?

In my controller I invoke a method from a service that saves an object to the database and returns the saved object. The class that I am saving has an auto-generated id, so when I save it to the database I expect to get it back with the id set (and that is working fine). In the same controller I store the result of that save() in a variable, and I found out that its id is not set. That is because save() actually writes to the database only once the transaction is completed (when I exit the controller method). My problem is that I want to use that result in a different service before I exit my controller. How can I force the service (and consequently the repository) to save it immediately and return me the result?
The reason for using the id of ClassA in ClassB is that ClassB is a connection table between two tables in the database, and I should update it only when certain conditions are met, but that is beside the point. I have already tried the saveAndFlush() method in the repository that this service is calling, but it doesn't help. My service is only calling save() or saveAndFlush() and nothing else (so the problem can't be in the service).
I have already tried the @Transactional annotation with REQUIRES_NEW, but it isn't working.
@PostMapping("")
public ClassA createClassA(@RequestBody ClassA classA) {
    ClassA a = classAService.saveClassA(classA);
    System.out.println("Id = " + a.getIdClassA());
    classBService.saveClassB(new ClassB(a.getIdClassA())); // throws an exception if the id is still 0
    return classA;
}
System.out.println prints Id = 0, but it should print Id = (some number generated by the database, which cannot be zero because the column is AUTO_INCREMENT).
I have already tested all other services, repositories, connections, etc. I am just interested in how to force the result to be available immediately so it can be stored in a variable and used later in the method.
Well, thank you for the comments, @JBNizet and @Lebecca, you were both right :). Indeed, saveAndFlush() would resolve my problem if I had told my class that the id will be generated by the database. That's why the solution is to add something like @GeneratedValue(strategy = GenerationType.IDENTITY) and to use saveAndFlush(). It worked after these two steps.

How to flush Spring Data JPA before calling stored procedure?

So I have a JPA entity (let's say Foo) for which there's a FooRepository defined as an extension of CrudRepository<Foo, Long>. The repository has a few custom methods, and among them there is a method (let's say initFoo) that maps to a stored procedure with the @Procedure annotation. Now in the service layer there is a method that looks pretty much like this (heavily oversimplified):
Foo f = new Foo();
f.setId(5L);
f.setName("Bar");
fooRepository.save(f);
fooRepository.initFoo(f.getId());
Calling this method results in an error from the stored procedure. Upon closer inspection (constraint violation: key foo_id=5 does not exist) it appears that the entity Foo doesn't end up in the database right after fooRepository.save() completes. Most probably the entity manager decides there is no rush and keeps the entity in memory/cache.
The question is: how do I convince the EntityManager to flush that particular entity to the db? I'd like to avoid wiring up the EntityManager in the service layer and calling flush() directly. I've tried annotating the stored procedure method with @Modifying, but it appears that only works with @Query methods. Any sane way to have this issue resolved?
Spring Boot (with spring-boot-starter-data-jpa) 1.3.3.RELEASE
Instead of using CrudRepository you can extend JpaRepository, which contains the saveAndFlush() method.

SpringData/Hibernate @ManyToOne cascading automatically when it should not

Given two entities like so:
@Entity
public class Thing {

    @ManyToOne
    private ThingType thingType;

    ...
}

@Entity
public class ThingType {

    private String name;

    ...
}
From everything I have read, the default cascade should be none, so if I get a reference to a Thing, change the name field of its ThingType, and then call thingRepo.save(thing) on a JpaRepository<Thing, Long>, I would expect the change to the ThingType's name not to be persisted.
However, this is not the case and the change is persisted. I am not sure why this is happening. What am I missing here?
Relevant versions:
org.springframework.boot:spring-boot:jar:1.5.7.RELEASE
org.hibernate:hibernate-core:jar:5.0.12.Final
org.springframework.data:spring-data-jpa:jar:1.11.7.RELEASE
Well, I would have expected the same, but it seems that Hibernate has its own default behaviour. In the Hibernate forum someone asked almost the same question. The answer refers to the Hibernate "dirty checking" feature, which will detect and persist/merge the change. You might change that behaviour by using Hibernate's Cascade annotation.
Cascading is something else. Let me show you something; do the following:
Thing thing = session.get(Thing.class, someId);
thing.getThingType().setName("new name");
and nothing more; again, you will see Hibernate update the ThingType.
This is called dirty checking: as long as an entity is attached to an active Hibernate session and its persistent state changes, Hibernate automatically updates the associated row in the database, even without calling save or update.
So what is cascade?
Consider following case:
Thing myThing = new Thing();
ThingType myThingType = new ThingType();
myThing.setThingType(myThingType);
session.save(myThing);
If the association's cascade type is not set, then you will get an exception, because you are referencing a transient ThingType object. But if you set the cascade type to persist, then Hibernate first saves the ThingType and then saves the Thing, and everything goes fine.
So remember: if you fetch an object and then update its properties in the same session, there is no need to call an update or saveOrUpdate method on the Hibernate Session (or JPA EntityManager), because the object is already in the attached state and its state is tracked by Hibernate.

Why doesn't the JPA find() method read uncommitted changes?

I am puzzled by a JPA behaviour which I did not expect (using EclipseLink).
I run a stateless session EJB (3.2) on WildFly 10 (JDK 8). My method call is, by default, encapsulated in a transaction.
Now my business method, when reading and updating an entity bean, did not recognize the updates, especially the version number of the entity. So my call results in an
org.eclipse.persistence.exceptions.OptimisticLockException
My code, simplified, looks like this:
public ItemCollection process(MyData workitem) {
    ....
    // load document from jpa
    persistedDocument = manager.find(Document.class, id);
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 3
    // change some data
    ....
    manager.flush();
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 4
    ....
    // load document from jpa once again
    persistedDocument = manager.find(Document.class, id);
    logger.info("#version=" + persistedDocument.getVersion());
    // prints e.g. 3 (!!)
    // change some data
    ....
    manager.flush();
    // Throws OptimisticLockException !!
    // ...Document#1fbf7c8e] cannot be updated because it has changed or been deleted since it was last read
    ...
}
If I put the code (which changes the data and flushes the entity bean) into a method annotated with
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
everything works as expected.
But why does the second call to find() in my code not read the new version number? I would expect version 4 after the flush() and find() calls.
After all, it looks like calling
manager.clear();
solves the problem. I thought that detaching the object would do the same, but in my case only calling clear() fixed the problem.
More findings:
After all, it seems it is not a good idea to call detach() and flush() from a service layer. I did this because I wanted to get the new version id of my entity before leaving my business method, so that I could return this id to the client. I changed my strategy in this case and removed all the 'bad stuff' with detaching and flushing my entity beans. The code became clearer, and overall its complexity was reduced dramatically.
And of course the EntityManager now behaves correctly. If I query the same entity bean several times in one transaction, the EntityManager returns the correct updated version.
So the answer to my own question is: leave out flush() and clear() unless there is a really good reason to use them.

Detach an entity from JPA/EJB3 persistence context

What would be the easiest way to detach a specific JPA entity bean that was acquired through an EntityManager? Alternatively, could I have a query return detached objects in the first place, so they would essentially act as read-only?
The reason I want to do this is that I want to modify the data within the bean, within my application only, but never have it persisted to the database. In my program, I eventually have to call flush() on the EntityManager, which would persist all changes from attached entities to the underlying database, but I want to exclude specific objects.
(may be too late to answer, but can be useful for others)
I'm developing my first system with JPA right now. Unfortunately I'm faced with this problem when this system is almost complete.
Simply put. Use Hibernate, or wait for JPA 2.0.
In Hibernate, you can use session.evict(object) to remove one object from the session. In JPA 2.0, currently in draft, there is the EntityManager.detach(object) method to detach one object from the persistence context.
No matter which JPA implementation you use, just use entityManager.detach(object); it's now in JPA 2.0 and part of Java EE 6.
If you need to detach an object from the EntityManager and you are using Hibernate as your underlying ORM layer you can get access to the Hibernate Session object and use the Session.evict(Object) method that Mauricio Kanada mentioned above.
public void detach(Object entity) {
    org.hibernate.Session session = (org.hibernate.Session) entityManager.getDelegate();
    session.evict(entity);
}
Of course this would break if you switched to another ORM provider, but I think this is preferable to trying to make a deep copy.
Unfortunately, there's no way to disconnect one object from the entity manager in the current JPA implementation, AFAIR.
EntityManager.clear() will disconnect all the JPA objects, so that might not be an appropriate solution in all the cases, if you have other objects you do plan to keep connected.
So your best bet would be to clone the objects and pass the clones to the code that changes the objects. Since primitive and immutable object fields are taken care of by the default cloning mechanism in a proper way, you won't have to write a lot of plumbing code (apart from deep cloning any aggregated structures you might have).
As far as I know, the only direct ways to do it are:
Commit the txn - Probably not a reasonable option
Clear the Persistence Context - EntityManager.clear() - This is brutal, but would clear it out
Copy the object - Most of the time your JPA objects are serializable, so this should be easy (if not particularly efficient).
If using EclipseLink you also have these options:
Use the query hint "eclipselink.maintain-cache"="false" - all returned objects will be detached.
Use the EclipseLink JpaEntityManager copy() API to copy the object to the desired depth.
If there aren't too many properties in the bean, you might just create a new instance and set all of its properties manually from the persisted bean.
This could be implemented as a copy constructor, for example:
public Thing(Thing oldBean) {
    this.setPropertyOne(oldBean.getPropertyOne());
    // and so on
}
Then:
Thing newBean = new Thing(oldBean);
This is quick and dirty, but you can also serialize and deserialize the object.
Since I am using Seam and JPA 1.0, and my system has functionality that needs to log all field changes, I have created a value object (data transfer object) with the same fields as the entity that needs to be logged. The constructor of the new POJO is:
public DocumentoAntigoDTO(Documento documentoAtual) {
    Method[] metodosDocumento = Documento.class.getMethods();
    for (Method metodo : metodosDocumento) {
        if (metodo.getName().startsWith("get")) {
            try {
                // read the value from the entity's getter
                Object resultadoInvoke = metodo.invoke(documentoAtual);
                Method[] metodosDocumentoAntigo = DocumentoAntigoDTO.class.getMethods();
                for (Method metodoAntigo : metodosDocumentoAntigo) {
                    String metodSetName = "set" + metodo.getName().substring(3);
                    // copy it into the matching setter of the DTO
                    if (metodoAntigo.getName().equals(metodSetName)) {
                        metodoAntigo.invoke(this, resultadoInvoke);
                    }
                }
            } catch (IllegalArgumentException e) {
                e.printStackTrace();
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            } catch (InvocationTargetException e) {
                e.printStackTrace();
            }
        }
    }
}
In JPA 1.0 (tested using EclipseLink) you could retrieve the entity outside of a transaction. For example, with container-managed transactions you could do:
public MyEntity myMethod(long id) {
    final MyEntity myEntity = retrieve(id);
    // myEntity is detached here
    return myEntity;
}

@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public MyEntity retrieve(long id) {
    return entityManager.find(MyEntity.class, id);
}
To deal with a similar case I have created a DTO object that extends the persistent entity object, as follows:
class MyEntity {
    public static class MyEntityDO extends MyEntity {}
}
Finally, a scalar query will retrieve the desired non-managed attributes:
(Hibernate) select p.id, p.name from MyEntity p
(JPA) select new MyEntity(p.id, p.name) from MyEntity p
If you get here because you actually want to pass an entity across a remote boundary, then you just put some code in to fool Hibernate.
// iterate over the lazy collection once to force Hibernate to initialize it
for (RssItem i : result.getChannel().getItem()) {
}
Cloneable won't work because it actually copies the PersistentBag across.
And forget about using Serializable with byte-array streams and piped streams: creating threads to avoid deadlocks kills the entire concept.
I think there is a way to evict a single entity type from the shared (second-level) cache by calling this:
EntityManagerFactory emf;
emf.getCache().evict(Entity.class);
This will remove that particular entity type from the shared cache.
I'm using entityManager.detach(returnObject), which worked for me.
I think you can also use the EntityManager.refresh(Object o) method if the primary key of the entity has not been changed. This method will restore the original state of the entity.
