I have some quite complex methods which create different entities during their execution and use them. For instance, I create some images and then add them to an article:
@Transactional
public void createArticle() {
    List<Image> images = ...
    for (int i = 0; i < 10; i++) {
        // creating some new images, method annotated @Transactional
        images.add(repository.createImage(...));
    }
    Article article = getArticle();
    article.addImages(images);
    em.merge(article);
}
This works correctly – the images get their IDs and are then added to the article. The problem is that for the duration of this execution the database is locked and nothing else can modify it. This is very inconvenient, because the images might be processed by some graphic processor, which can take some time.
So we might try removing @Transactional from the main method. That looks promising.
What happens then is that the images are correctly created and have their IDs. But once I try to add them to the article and call merge, I get javax.persistence.EntityNotFoundException for Image with ID XXXX. The entity manager can't see that the image was created and has its ID. So the database is not locked, but we can't do anything either.
So what can I do? I don't want the database locked during the whole execution, and I want to be able to access the created entities!
I am using current versions of Spring and Hibernate, everything defined by annotations. I don't use a session factory; I access everything via javax.persistence.EntityManager.
Consider leveraging Hibernate's cascading functionality to persist the whole object tree in one go with minimal database locking:
@Entity
public class Article {
    @OneToMany(cascade = CascadeType.MERGE)
    private List<Image> images;
}
@Transactional
public void createArticle() {
    // images created as Java objects in memory, no DAOs called yet
    List<Image> images = ...
    Article article = getArticle();
    article.addImages(images);
    // cascading will save the article AND the images
    em.merge(article);
}
This way the article AND its images get persisted at the end of the method in a single transaction with a minimal lifetime. Until then, no locking occurs on the database.
Alternatively, split createArticle into two @Transactional business methods, one createImages and the other addImagesToArticle, and call them one after the other from a third method in another bean:
@Service
public class OtherBean {

    @Autowired
    private YourService yourService;

    // note that no @Transactional annotation is used here, this is intentional
    public void otherMethod() {
        yourService.createImages(); // first transaction - images are committed
        yourService.addImagesToArticle(); // second transaction - images are added to the article
    }
}
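The two methods in YourService could look roughly like this (a sketch only; ImageRepository, getArticle() and the exact signatures are assumptions based on the question, and the caller above would pass the list returned by the first call into the second one):
@Service
public class YourService {

    @Autowired
    private ImageRepository repository; // assumed repository from the question

    @PersistenceContext
    private EntityManager em;

    @Transactional // first, short transaction: the images are committed at its end
    public List<Image> createImages() {
        List<Image> images = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            images.add(repository.createImage(/* ... */));
        }
        return images;
    }

    @Transactional // second, short transaction: the committed images are attached to the article
    public void addImagesToArticle(List<Image> images) {
        Article article = getArticle();
        article.addImages(images);
        em.merge(article);
    }
}
Because the calls go through the Spring proxy of YourService, each method runs in its own short transaction, so the database is never locked for the whole image-processing run.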
You could also try setting the transaction isolation on your datasource to READ_UNCOMMITTED, though that can lead to inconsistencies, so it is generally not recommended.
My best guess is that your transaction isolation level is SERIALIZABLE. That is why the DB locks the affected tables for the whole duration of the transaction.
If that is the case, change the level to READ_COMMITTED. Hibernate (or any JPA provider) works nicely with it.
It won't lock anything unless you explicitly call entityManager.lock(someEntity, someLockModeType).
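With Spring, the isolation level can be set directly on the annotation (a sketch; the method body is the one from the question):
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(isolation = Isolation.READ_COMMITTED)
public void createArticle() {
    // same body as before; with READ_COMMITTED the provider no longer
    // holds locks on the affected rows for the whole transaction the
    // way SERIALIZABLE would
}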
Also, when you choose transaction boundaries, think first in terms of atomicity: if createArticle() is an atomic unit of work, it simply has to be made transactional; breaking it into smaller transactions for the sake of 'optimization' is wrong.
Sample Scenario
I have a limit that controls the total value of a column. If I make a save that exceeds this limit, I want it to throw an exception. For example:
Suppose I have already added the following data (LIMIT = 20):

id | code | value
---|------|------
1  | A    | 15
2  | A    | 5
3  | B    | 12
4  | B    | 3
If I insert (A, 2), it exceeds the limit (15 + 5 + 2 > 20) and I want to get an exception.
If I insert (B, 4), the transaction should succeed, since it doesn't exceed the limit (12 + 3 + 4 <= 20).
code and value are interrelated.
What can I do?
I can check this scenario with the required queries; for example, I can write a method for it and call it in the save method. That works.
However, I'm looking for a more convenient solution than this.
For example, is there any annotation for this when designing the entity?
Can I do this without calling the method that performs this check every time?
Examples of what I have considered:
@UniqueConstraint, which checks whether the same values are being inserted
Using a transaction
The most common and long-accepted way is to simply abstract, in a suitable form (a class, a library, a service, ...), the business rules that govern the behavior you describe, and run them within a transaction:
@Transactional(propagation = Propagation.REQUIRED)
public RetType operation(ReqType args) {
    ...
    perform operations;
    ...
    if (post conditions fail)
        throw ...;
    ...
}
In this case, if there is already an open transaction when the method is called, that transaction is used (and there are no interlocks); if no transaction exists yet, a new one is created, so that both the operations and the postcondition check are performed within the same transaction.
Note that with this strategy, both the operations and the invariant check can combine multiple transactional resources managed by the TransactionManager (e.g. Redis, MySQL, MQS, ... simultaneously and in a coordinated manner).
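As an illustration of that propagation behavior (a sketch; OrderService, Order and the collaborating StockService are hypothetical names):
@Service
public class OrderService {

    @Autowired
    private StockService stockService; // hypothetical collaborator, also @Transactional(propagation = Propagation.REQUIRED)

    @Transactional(propagation = Propagation.REQUIRED)
    public void placeOrder(Order order) {
        // no transaction is open yet when called from a plain caller, so one is created here
        stockService.reserve(order); // REQUIRED: joins this same transaction instead of opening a new one
        if (!order.isValid()) {
            // postcondition failed: the RuntimeException rolls back both operations above
            throw new IllegalStateException("post conditions failed");
        }
    }
}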
Using only the database
Using TRIGGERS has not been the preferred approach for a long time (the first way is favored nowadays), but it was the canonical option a few decades ago for checking postconditions. This solution is usually coupled to the specific database engine (e.g. PostgreSQL or MySQL).
It can be useful when the client making the modifications is unable or not trusted to check the postconditions within a transaction (e.g. bash processes), but nowadays that is infrequent.
The use of TRIGGERS may also be preferable in certain scenarios where efficiency is required, as there are certain optimization options available within database scripts.
Neither Hibernate nor Spring Data JPA has anything built-in for this scenario. You have to program the transaction logic in your repository yourself:
@PersistenceContext
private EntityManager em;

public void addValue(String code, int value) {
    // SUM(...) returns a Long in JPQL and would be null if no rows match, hence COALESCE
    var checkQuery = em.createQuery(
            "SELECT COALESCE(SUM(e.value), 0) FROM Entity e WHERE e.code = :code", Long.class);
    checkQuery.setParameter("code", code);
    if (checkQuery.getSingleResult() + value > 20) {
        throw new LimitExceededException("attempted to exceed limit for " + code);
    }
    var newEntity = new Entity();
    newEntity.setCode(code);
    newEntity.setValue(value);
    em.persist(newEntity);
}
Then (this is important!) you have to set the SERIALIZABLE isolation level on the @Transactional annotations of the methods that work with this table.
Read more about the serializable isolation level here, they have an oddly similar example.
Note that you have to consider retrying the transaction when the database aborts it. I am not sure of the idiomatic way to do this with Spring, though.
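One possible approach (a sketch, assuming the separate spring-retry library is on the classpath and @EnableRetry is configured) is to wrap the serializable transaction in @Retryable, so an aborted attempt is simply re-run:
import org.springframework.dao.ConcurrencyFailureException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Retryable(value = ConcurrencyFailureException.class,
           maxAttempts = 3, backoff = @Backoff(delay = 50))
@Transactional(isolation = Isolation.SERIALIZABLE)
public void addValueWithRetry(String code, int value) {
    // re-executed up to 3 times when the database aborts the serializable transaction
    addValue(code, value);
}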
You should use a singleton (javax.ejb.Singleton):
@Singleton
public class Register {

    @Lock(LockType.WRITE)
    public void register(String code, int value) {
        if (i_can_insert_modify(code, value)) {
            // use entityManager or some DAO
        } else {
            // do something else
        }
    }
}
I'm going to go mad with JPA...
I have a JAX-WS web service like this:
@WebService
public class MyService
{
    @EJB private MyDbService myDbService;
    ...
    System.out.println(myDbService.read());
    ...
}
My EJB contains:
@Stateless
public class MyDbService
{
    @PersistenceContext(unitName = "mypu")
    private EntityManager entityManager;

    public MyEntity read()
    {
        MyEntity myEntity;
        String queryString = "SELECT ... WHERE e.name = :type";
        TypedQuery<MyEntity> query = entityManager.createQuery(queryString, MyEntity.class);
        query.setParameter("type", "xyz");
        try
        {
            myEntity = query.getSingleResult();
        }
        catch (Exception e)
        {
            myEntity = null;
        }
        return myEntity;
    }
}
In my persistence.xml the mypu unit has transaction-type="JTA" and a jta-data-source.
If I call the web service, it works: the entity is retrieved from the DB.
Now, using an external tool, I change the value of one field in my record.
I call the web service again and ... the entity displayed contains the old value.
If I redeploy, or if I add an entityManager.refresh(myEntity) after the query, I get the correct value again.
In @MyTwoCents' answer, option 2 is to NOT use your 'external' tool for changes - use your application instead. Caching is most useful when your application knows about all the changes going on, or has some way of being informed of them. This is the better option, but only if your application can be the single access point for the data.
Forcing a refresh, via EntityManager.refresh(), through provider-specific query hints on specific queries, or by invalidating the cache as described here https://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching#How_to_refresh_the_cache is another option. This forces JPA to go past the cache and access the database for the specific query. The problem is that you must either know when the cache is stale and needs to be refreshed, or use the hints on queries that cannot tolerate stale data. If that is fairly frequent, or happens on every query, then your application is going through all the work of maintaining a cache that is never used.
The last option is to turn off the second-level cache. This forces queries to always load entities into an EntityManager from the database, not from a second-level cache. You reduce the risk of stale data (but do not eliminate it, as the EntityManager is required to keep its own first-level cache of managed entities, representing a transactional cache), but at the cost of reloading and rebuilding entities, sometimes unnecessarily if they have already been read by other threads.
Which is best depends entirely on the application and its expected use cases.
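For the refresh option, the standard JPA 2.0 cache hints can force an individual query past the shared cache (a sketch based on the query from the question):
import javax.persistence.CacheRetrieveMode;
import javax.persistence.CacheStoreMode;
import javax.persistence.TypedQuery;

TypedQuery<MyEntity> query = entityManager.createQuery(
        "SELECT e FROM MyEntity e WHERE e.name = :type", MyEntity.class);
query.setParameter("type", "xyz");
// bypass the shared cache for this read and refresh it with the result
query.setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
query.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
MyEntity fresh = query.getSingleResult();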
Don't be mad, it's fine.
The flow goes like this:
You fired a query saying where type = "xyz".
Hibernate keeps this query result in its cache, so that if you fire the same query again it returns the same value, as long as (from its point of view) the state has not changed.
Now you update the data from some external resource.
Hibernate doesn't have any clue about that.
So when you fire the query again, it answers from the cache.
When you call refresh, Hibernate fetches the details from the database.
Solution: you can either call refresh before the get call,
OR
change the table values using Hibernate methods in the application, so that Hibernate is aware of the changes,
OR
disable the Hibernate cache so that it queries the DB each time (not recommended, as it will slow things down).
We are using Spring and iBATIS, and I have discovered something interesting in the way a service method annotated with @Transactional handles multiple DAO calls that return the same record. Here is an example of a method that does not work.
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
The problem with the above method is that the second execution of the line
individualDAO.selectByPrimaryKey(trans.getPartyId())
returns the exact same object that the first call returned.
This means that oldIndvRecord and individual are the same object, so the line
individualHistoryDAO.insert(oldIndvRecord);
adds a row to the history table that already contains the changes (which we do not want).
For it to work, it has to look like this:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(individual);
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    individualDAO.updateByPrimaryKey(individual);
}
We wanted to write a service method called updateIndividual that we could use for all updates of this table, and that would store a row in the IndividualHistory table before performing the update.
@Transactional
public void updateIndividual(Individual individual) {
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(individual.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
But it does not store the row as it was before the change. We can even explicitly instantiate different objects before the DAO calls, and the second one still ends up being the same object as the first.
I have looked through the Spring documentation and cannot determine why this is happening.
Can anyone explain this?
Is there a setting that makes the second DAO call return the database contents rather than the previously returned object?
This behavior is not specific to your stack. Hibernate, for instance, describes exactly the same thing in the Transactions chapter of its documentation:
Through Session, which is also a transaction-scoped cache, Hibernate provides repeatable reads for lookup by identifier and entity queries and not reporting queries that return scalar values.
The same goes for iBATIS/MyBatis:
MyBatis uses two caches: a local cache and a second level cache. Each time a new session is created, MyBatis creates a local cache and attaches it to the session. Any query executed within the session will be stored in the local cache, so further executions of the same query with the same input parameters will not hit the database. The local cache is cleared upon update, commit, rollback and close.
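So if the second lookup must reflect the database, one workaround (a sketch, assuming you can reach the underlying MyBatis SqlSession from your DAO layer) is to clear the local cache between the two calls:
Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
// drop the session-local cache so the next select hits the database
// and returns a fresh object instead of the cached one
sqlSession.clearCache();
Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());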
I just started working on a Spring Data, Hibernate, MySQL, JPA project. I switched to Spring Data so that I wouldn't have to worry about creating queries by hand.
I noticed that @Transactional doesn't seem to be required when you're using Spring Data, since I also tried my queries without the annotation.
Is there a specific reason why I should or shouldn't be using the @Transactional annotation?
Works:
@Transactional
public List listStudentsBySchool(long id) {
    return repository.findByClasses_School_Id(id);
}
Also works:
public List listStudentsBySchool(long id) {
    return repository.findByClasses_School_Id(id);
}
What is your question actually about: the usage of the @Repository annotation, or @Transactional?
@Repository is not needed at all, as the interface you declare will be backed by a proxy that the Spring Data infrastructure creates and that activates exception translation anyway. So using this annotation on a Spring Data repository interface does not have any effect at all.
@Transactional - for the JPA module we have this annotation on the implementation class backing the proxy (SimpleJpaRepository). This is for two reasons: first, persisting and deleting objects requires a transaction in JPA, so we need to make sure a transaction is running, which we do by annotating the methods with @Transactional.
Second, reading methods like findAll() and findOne(…) use @Transactional(readOnly = true), which is not strictly necessary but triggers a few optimizations in the transaction infrastructure (setting the FlushMode to MANUAL so that persistence providers can potentially skip dirty checks when closing the EntityManager). Beyond that, the flag is set on the JDBC Connection as well, which causes further optimizations at that level.
Depending on the database you use, it can omit table locks or even reject write operations you might trigger accidentally. Thus we recommend using @Transactional(readOnly = true) for query methods as well, which you can easily achieve by adding that annotation to your repository interface. Make sure you add a plain @Transactional to any manipulating methods you might have declared or re-decorated in that interface.
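Re-decorating methods on the repository interface looks like this (a sketch; the entity and the query method are illustrative, modeled on the question's findByClasses_School_Id example):
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;

public interface StudentRepository extends JpaRepository<Student, Long> {

    @Transactional(readOnly = true) // read-only optimization for the query method
    List<Student> findByClasses_School_Id(long id);

    @Override
    @Transactional // plain read-write transaction for the manipulating method
    <S extends Student> S save(S entity);
}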
In your examples it depends on whether your repository has @Transactional or not.
If yes, then your service (as it currently stands) should not add @Transactional, since there is no point in it. You may add @Transactional later if you add more logic to your service that deals with other tables/repositories; then there will be a point in having it.
If no, then your service should use @Transactional if you want to make sure you do not have isolation issues, e.g. that you are not reading something that has not yet been committed.
--
If we are talking about repositories in general (as a CRUD collection interface):
I would say: NO, you should not use @Transactional.
Why not: if we believe that a repository lives outside of the business context, it should not know about propagation or isolation (lock levels). It cannot guess which transactional context it will be involved in.
Repositories are "business-less" (if you believe so).
Say you have a repository:
class MyRepository {
    void add(Entity entity) { ... }
    Entity findByName(String name) { ... }
}
and there is some business logic, say MyService:
class MyService {

    @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.SERIALIZABLE)
    void doIt() {
        var entity = myRepository.findByName("some-name");
        if (entity.field.equals("expected")) {
            ...
            myRepository.add(newEntity);
        }
    }
}
I.e. in this case MyService decides what it wants to involve the repository in.
Here propagation = REQUIRED makes sure that BOTH repository methods, findByName() and add(), are involved in a single transaction, and isolation = SERIALIZABLE makes sure that nobody can interfere with it: it keeps a lock on the table(s) that findByName() and add() touch.
But some other service may want to use MyRepository differently, without involving it in any transaction at all; say it only uses the findByName() method and is not interested in any restrictions, reading whatever it can find at that moment.
I would say YES if you treat your repository as one that always returns valid entities (no dirty reads, etc.), saving users from using it incorrectly. I.e. the repository itself should take care of the isolation problem (concurrency and data consistency), as in this example:
We want the repository to make sure that when we add(newEntity), it first checks whether an entity with the same name already exists, and only then inserts, all in one locking unit of work (the same thing we did at the service level above, but now we move this responsibility to the repository).
Say the business rule is that there cannot be two tasks with the same name in the "in-progress" state.
class TaskRepository {

    @Transactional(propagation = Propagation.REQUIRED,
                   isolation = Isolation.SERIALIZABLE)
    void add(Task entity) {
        var name = entity.getName();
        var found = this.findFirstByName(name);
        // insert only if no task with this name is already in progress
        if (found == null || !found.getStatus().equals("in-progress")) {
            // ... do insert
        }
    }

    @Transactional
    Task findFirstByName(String name) { ... }
}
The second approach is more like a DDD-style repository.
I guess there is more to cover if:
class Service {

    @Transactional(isolation = ..., propagation = ...) // where these differ from what is defined on TaskRepository.add()
    void doStuff() {
        taskRepository.add(task);
    }
}
You should use the @Repository annotation.
This is because @Repository is used to translate your unchecked SQL exceptions to Spring exceptions, so the only exception you have to deal with is DataAccessException.
We also use the @Transactional annotation to lock the record, so that another thread/request cannot change what we have read.
We use the @Transactional annotation when we create or update more than one entity at the same time. If the method carrying @Transactional throws an exception, the annotation ensures the earlier inserts are rolled back.
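For example (an illustrative sketch; the repositories and entities are hypothetical):
@Transactional
public void createArticleWithImages(Article article, List<Image> images) {
    articleRepository.save(article);
    // if this second save throws a RuntimeException, the article insert
    // above is rolled back as well and the database stays unchanged
    imageRepository.saveAll(images);
}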
What would be the easiest way to detach a specific JPA entity bean that was acquired through an EntityManager? Alternatively, could I have a query return detached objects in the first place, so they would essentially act as 'read only'?
The reason I want to do this is because I want to modify the data within the bean, within my application only, but never have it persisted to the database. In my program I eventually have to call flush() on the EntityManager, which would persist all changes from attached entities to the underlying database, but I want to exclude specific objects.
(may be too late to answer, but can be useful for others)
I'm developing my first system with JPA right now. Unfortunately, I'm faced with this problem when the system is almost complete.
Simply put: use Hibernate, or wait for JPA 2.0.
In Hibernate, you can use session.evict(object) to remove one object from the session. In JPA 2.0, currently in draft, there is the EntityManager.detach(object) method to detach one object from the persistence context.
No matter which JPA implementation you use, just use entityManager.detach(object); it's now in JPA 2.0 and part of Java EE 6.
If you need to detach an object from the EntityManager and you are using Hibernate as your underlying ORM layer, you can get access to the Hibernate Session object and use the Session.evict(Object) method that Mauricio Kanada mentioned above.
public void detach(Object entity) {
    org.hibernate.Session session = (Session) entityManager.getDelegate();
    session.evict(entity);
}
Of course this would break if you switched to another ORM provider, but I think it is preferable to trying to make a deep copy.
Unfortunately, there's no way to disconnect one object from the entity manager in the current JPA implementation, AFAIK.
EntityManager.clear() will disconnect all the JPA objects, so that might not be an appropriate solution in all cases, if you have other objects you do plan to keep connected.
So your best bet would be to clone the objects and pass the clones to the code that changes them. Since primitive and immutable object fields are taken care of properly by the default cloning mechanism, you won't have to write a lot of plumbing code (apart from deep cloning any aggregated structures you might have).
As far as I know, the only direct ways to do it are:
Commit the transaction - probably not a reasonable option
Clear the persistence context - EntityManager.clear() - this is brutal, but would clear it out
Copy the object - most of the time your JPA objects are serializable, so this should be easy (if not particularly efficient)
If you are using EclipseLink you also have these options:
Use the query hint eclipselink.maintain-cache = false - all returned objects will be detached.
Use the EclipseLink JpaEntityManager copy() API to copy the object to the desired depth.
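The first of those options would be applied as a query hint like this (a sketch; MyEntity is a placeholder):
import java.util.List;
import javax.persistence.TypedQuery;

// returned objects bypass the shared cache and come back detached
TypedQuery<MyEntity> query = em.createQuery(
        "SELECT e FROM MyEntity e", MyEntity.class);
query.setHint("eclipselink.maintain-cache", "false");
List<MyEntity> detached = query.getResultList();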
If there aren't too many properties in the bean, you might just create a new instance and set all of its properties manually from the persisted bean.
This could be implemented as a copy constructor, for example:
public Thing(Thing oldBean) {
    this.setPropertyOne(oldBean.getPropertyOne());
    // and so on
}
Then:
Thing newBean = new Thing(oldBean);
This is quick and dirty, but you can also serialize and deserialize the object.
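A minimal sketch of that approach, assuming the whole object graph implements Serializable:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

@SuppressWarnings("unchecked")
public static <T extends Serializable> T detachedCopy(T entity) {
    try {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(entity); // serializes the entity and everything it references
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return (T) in.readObject(); // the copy has no connection to the EntityManager
        }
    } catch (IOException | ClassNotFoundException e) {
        throw new IllegalStateException("deep copy failed", e);
    }
}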
Since I am using Seam and JPA 1.0, and my system has a functionality that needs to log all field changes, I have created a value object (or data transfer object) with the same fields as the entity that needs to be logged. The constructor of the new POJO is:
public DocumentoAntigoDTO(Documento documentoAtual) {
    Method[] metodosDocumento = Documento.class.getMethods();
    for (Method metodo : metodosDocumento) {
        if (metodo.getName().startsWith("get")) {
            try {
                Object resultadoInvoke = metodo.invoke(documentoAtual);
                Method[] metodosDocumentoAntigo = DocumentoAntigoDTO.class.getMethods();
                for (Method metodoAntigo : metodosDocumentoAntigo) {
                    // match each getter with the corresponding setter on the DTO
                    String metodoSetName = "set" + metodo.getName().substring(3);
                    if (metodoAntigo.getName().equals(metodoSetName)) {
                        metodoAntigo.invoke(this, resultadoInvoke);
                    }
                }
            } catch (IllegalArgumentException e) {
                e.printStackTrace();
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            } catch (InvocationTargetException e) {
                e.printStackTrace();
            }
        }
    }
}
In JPA 1.0 (tested using EclipseLink) you can retrieve the entity outside of a transaction. For example, with container-managed transactions you could do:
public MyEntity myMethod(long id) {
    final MyEntity myEntity = retrieve(id);
    // myEntity is detached here
    return myEntity;
}

@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public MyEntity retrieve(long id) {
    return entityManager.find(MyEntity.class, id);
}
To deal with a similar case, I have created a DTO object that extends the persistent entity object, as follows:
class MyEntity
{
    public static class MyEntityDO extends MyEntity {}
}
Finally, a scalar query will retrieve the desired non-managed attributes:
(Hibernate) select p.id, p.name from MyEntity p
(JPA) select new MyEntity(p.id, p.name) from MyEntity p
If you got here because you actually want to pass an entity across a remote boundary, you can just put in some code to fool Hibernate, iterating over each lazy collection so that it gets initialized before serialization:
for (RssItem i : result.getChannel().getItem()) {
    // iterating forces the lazy collection to be initialized
}
Cloneable won't work, because it actually copies the PersistentBag across.
And forget about using Serializable with byte-array streams and piped streams: creating threads to avoid deadlocks kills the entire concept.
I think there is a way to evict a single entity from the second-level cache by calling this:
EntityManagerFactory emf;
emf.getCache().evict(MyEntity.class, id);
This removes the particular entity from the shared cache (note that it does not detach it from an open persistence context).
I'm using entityManager.detach(returnObject); and it worked for me.
I think you can also use the method EntityManager.refresh(Object o), if the primary key of the entity has not been changed. That method will restore the original state of the entity.