Not sure if 'scope' is the correct term here.
I am using Spring for JPA transaction management (with Hibernate underneath). My method that performs the database transaction is private, but you can only set @Transactional on a class or on a public method:
Since this mechanism is based on proxies, only 'external' method calls coming in through the proxy will be intercepted. This means that 'self-invocation', i.e. a method within the target object calling some other method of the target object, won't lead to an actual transaction at runtime even if the invoked method is marked with @Transactional!
So I have marked the public entry point of the class as @Transactional:
@Transactional
public void run(parameters) {
    // First non-database method; takes a decent amount of time
    Data data = getData();
    // Call to database
    storeData(data);
}

private void storeData(Data data) {
    em.persist(data);
}
Is this bad practice? Does Spring keep a transaction open for longer than needed here? I was thinking of moving the storeData() method to a DAO class and making it public, but as an academic point, I wanted to know whether refactoring it to public would have any performance benefit.
If there's heavy contention on the DB, keeping transactions as small as possible is definitely crucial -- much more important than public vs private distinctions, which, per se, don't affect performance and scalability. So, be practical...!
The transaction scope has no effect until your code does something which interacts with the transaction context, in this case the storeData() method. The fact that getData() is non-transactional should not affect the performance or concurrency of your code, since any database locking will only happen when storeData() is reached.
As everyone pointed out, we should keep transactions as small as possible so that connections remain available for other requests.
Can it be refactored like this?

public void run(parameters) {
    Data data = getData();
    storeData(data);
}

@Transactional
public void storeData(Data data) {
    em.persist(data);
}
Related
Our application uses Spring Cache, and we need to know whether a response was returned from the cache or was actually calculated. We are looking to add a flag to the result HashMap that indicates this. However, whatever the method returns is cached, so I'm not sure we can do it inside the calculate method implementation.
Is there any way to know whether the calculate method was executed or the return value came from the cache when calling the calculate method?
Code we are using for the calculate method:
@Cacheable(
    cacheNames = "request",
    key = "#cacheMapKey",
    unless = "#result['ErrorMessage'] != null")
public Map<String, Object> calculate(Map<String, Object> cacheMapKey, Map<String, Object> message) {
    // method implementation
    return result;
}
With a little extra work, it is rather simple to add a bit of state to your @Cacheable component service methods.
I use this technique when answering SO questions like this one, to show whether the value came from the cache or was actually computed by the service method.
You will notice that the @Cacheable @Service class extends an abstract base class (CacheableService) to help manage the "cacheable" state. That way, multiple @Cacheable @Service classes can utilize this functionality if need be.
The CacheableService class contains methods to query the state of the cache operation, like isCacheMiss() and isCacheHit(). Inside the @Cacheable methods, when they are invoked due to a "cache miss", is where you set this bit by calling setCacheMiss().
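Since the original example code is not reproduced here, the following is a minimal, Spring-free sketch of that arrangement. The class and method names (CacheableService, setCacheMiss, isCacheMiss, isCacheHit) follow the answer; the toy calculate() method and its in-memory "cache" are invented so the example runs without Spring:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;

// Abstract base class managing the "cacheable" state described above.
abstract class CacheableService {
    private final AtomicBoolean cacheMiss = new AtomicBoolean(false);

    protected void setCacheMiss() { cacheMiss.set(true); }

    // Read-and-reset: querying the bit clears it for the next operation.
    public boolean isCacheMiss() { return cacheMiss.getAndSet(false); }
    public boolean isCacheHit() { return !isCacheMiss(); }
}

class CalculationService extends CacheableService {
    private final Map<String, Integer> cache = new HashMap<>(); // stand-in for Spring's cache

    // In the real service this method would carry @Cacheable; here the
    // cache-aside logic is inlined so the example is self-contained.
    public int calculate(String key) {
        return cache.computeIfAbsent(key, k -> {
            setCacheMiss(); // only reached when the cache did not intervene
            return k.length() * 2;
        });
    }
}

public class CacheMissDemo {
    public static void main(String[] args) {
        CalculationService service = new CalculationService();
        service.calculate("abc");
        System.out.println(service.isCacheMiss()); // true: first call computed
        service.calculate("abc");
        System.out.println(service.isCacheMiss()); // false: second call was a hit
    }
}
```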
However, a few words of caution!
First, while the abstract CacheableService class manages the state of the cacheMiss bit with a thread-safe class (i.e. AtomicBoolean), the CacheableService class itself is not thread-safe when used in a highly concurrent environment where multiple @Cacheable service methods set the cacheMiss bit.
That is, if you have a component class with multiple @Cacheable service methods all setting the cacheMiss bit using setCacheMiss() in a multi-threaded environment (which is especially true in a Web application), then it is possible to read stale state of cacheMiss when querying the bit. Meaning, the cacheMiss bit could be true or false depending on the state of the cache, the operation called, and the interleaving of threads. Therefore, more work is needed in this case, so be careful if you are relying on the state of the cacheMiss bit for critical decisions.
Second, this approach, using an abstract CacheableService class, does not work for Spring Data (CRUD) Repositories based on an interface. As others have mentioned in the comments, you could encapsulate this caching logic in an AOP Advice and intercept the appropriate calls, in this case. Personally, I prefer that caching, security, transactions, etc, all be managed in the Service layer of the application rather than the Data Access layer.
Finally, there are undoubtedly other limitations you might run into, as the example code I have provided above was never meant for production, only demonstration purposes. I leave it to you as an exercise to figure out how to mold these bits for your needs.
I understand that if we use the @Transactional annotation, the save() method is not necessary. Is that correct?
And for my example:
@Transactional
void methodA() {
    ...
    ObjectEntity objectEntity = objectRepository.find();
    methodB(objectEntity);
}

void methodB(ObjectEntity obj) {
    ...
    obj.setName("toto");
    objectRepository.save(obj); // <-- is it necessary?
}
Thanks for your help
It works like the following:
save() attaches the entity to the session, and at the end of the transaction, as long as there were no exceptions, it will all be persisted to the database.
Now if you get the object from the DB (e.g. ObjectEntity objectEntity = objectRepository.find();) then that object is already attached and you don't need to call the save() method.
If the object, however, is detached (e.g. ObjectEntity objectEntity = new ObjectEntity();) then you must use the save() method in order to attach it to the session so that changes made to it are persisted to the DB.
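To make the attach-then-flush behavior concrete, here is a toy, Hibernate-free model of a persistence context. Every name here is invented for illustration, and real dirty checking is far more involved; the point is only that a loaded entity is remembered by the session and written back at commit without any save() call:

```java
import java.util.HashMap;
import java.util.Map;

// A mutable entity, as the session would hand it out.
class Entity {
    final long id;
    String name;
    Entity(long id, String name) { this.id = id; this.name = name; }
}

// Toy "session": find() attaches, commit() flushes every managed entity.
class ToySession {
    private final Map<Long, String> database = new HashMap<>();
    private final Map<Long, Entity> managed = new HashMap<>();

    ToySession() { database.put(1L, "old"); } // pre-existing row

    Entity find(long id) { // loading attaches the entity to the session
        return managed.computeIfAbsent(id, k -> new Entity(k, database.get(k)));
    }

    void commit() { // "dirty check": write back every managed entity
        for (Entity e : managed.values()) database.put(e.id, e.name);
    }

    String rawValue(long id) { return database.get(id); }
}

public class DirtyCheckingDemo {
    public static void main(String[] args) {
        ToySession session = new ToySession();
        Entity e = session.find(1L);
        e.name = "toto";                          // mutate the managed instance; no save()
        session.commit();
        System.out.println(session.rawValue(1L)); // toto
    }
}
```

A detached instance (new Entity(...) never passed through find()) would be invisible to commit() here, which mirrors why save() is needed to attach it in real JPA.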
[It is a little too late, but I hope it would be helpful to future readers]:
Within a transaction context, an update to a managed instance is reflected in the persistence storage at the commit/flush time, i.e., in your case at the end of methodB(). However, calling save() comes with a cost in scenarios like yours, as stated in Spring Boot Persistence Best Practices:
The presence or absence of save() doesn't affect the number or type of queries, but it still has a performance penalty, because the save() method fires a MergeEvent behind the scenes, which will execute a bunch of Hibernate-specific internal operations that are useless in this case. So, in scenarios such as these, avoid the explicit call of the save() method.
I have a singleton EJB that reads all objects with a specific state from the database. Then I do something with these objects and set their state to something else:
@Singleton
public class MyEJB {

    @PersistenceContext(unitName = "MyPu")
    private EntityManager em;

    @Lock(LockType.WRITE)
    public void doSomeStuffAndClose() {
        List<MyObj> objects = getAllOpenObjects();
        for (MyObj obj : objects) {
            // do some stuff here...
            obj.setClosed(true);
        }
    }

    private List<MyObj> getAllOpenObjects() {
        TypedQuery<MyObj> q = em.createQuery("select o from MyObj o "
                + "where o.closed = false", MyObj.class);
        return q.getResultList();
    }
}
Now, to ensure that my method cannot be called concurrently, I add the annotation @Lock(LockType.WRITE). But the transaction that sets the states in the database is committed AFTER the lock is released, so it is possible that the next caller grabs the same objects again.
How could I prevent this?
If you are using WildFly: this is a bug. https://issues.jboss.org/browse/WFLY-4844 describes your problem, which will be fixed in WildFly 10. There the problem is described as a timer problem, which might be the same as yours.
My workaround is to separate the code that does the work into another bean, which is called by the outer (timer) bean. The outer bean method is annotated to not start a transaction (@TransactionAttribute(TransactionAttributeType.NEVER)), so the transaction is started and safely finished in the second, new bean.
You could use SELECT FOR UPDATE to serialize the access of the rows.
With JPA 2, use LockModeType:
http://docs.oracle.com/javaee/6/api/javax/persistence/LockModeType.html

q.setLockMode(LockModeType.PESSIMISTIC_WRITE);
There's no way to do this in JPA (that is, in a portable way). Your options might be:
Some JPA implementations allow setting the isolation level on a per-query basis (e.g. OpenJPA); some don't (Hibernate). But even in OpenJPA this hint needs to be implemented by the particular database driver, otherwise it has no effect.
Running a native query – consult your database documentation for details.
As a side comment I should say that JPA (and Java EE in general) is not designed with bulk database operations in mind – it's rather for multiple concurrent queries for data items that in most cases don't overlap.
From your doSomeStuffAndClose() method, you can invoke a stateful session bean that implements the SessionSynchronization interface. Then, from the afterCompletion() method in the SFSB, you can inform the singleton bean that the data has been committed and it can handle another request.
I know that this way we have two really tightly coupled beans, but it should solve your problem.
You're using container-managed concurrency (the default). In Java EE 7 (not sure about older versions, but likely the same), the transaction is guaranteed to commit before the method exits, hence before the lock is released. From the Java EE 7 tutorial:
"Typically, the container begins a transaction immediately before an enterprise bean method starts and commits the transaction just before the method exits. Each method can be associated with a single transaction. Nested or multiple transactions are not allowed within a method."
https://docs.oracle.com/javaee/7/tutorial/doc/transactions003.htm#BNCIJ
If you're experiencing another behavior, check for any cache that might be active (@Cacheable). You may watch another interesting question here: https://stackoverflow.com/questions/26790667/timeout-and-container-managed-concurrency-in-singleton
By the way, LockType.WRITE is also the default; you don't need to make it explicit. Hence, getAllOpenObjects() is also LockType.WRITE.
This might sound a bit mundane, but can someone tell me if there is a good practice which says DAOs should not store state information, i.e. non-static, non-final member variables? Most of the DAOs that I have come across contain only static and final variables.
public class CustomerDAO extends CommonDAO {
    private String txnid;
    private String txnName;
    // getters and setters...
}
For me, a DAO is "just a pipe", made to encapsulate database communication. It constructs and executes the queries and/or proxies the EntityManager, so at least for JPA, no state is needed except for the EntityManager instance. Queries do not depend on each other directly.
So I would put the question the other way round - What sensible state could a DAO have?
Strongly no on this one: DAOs' whole reason for existing is to provide a stateless range of DB access methods. Most developers reading your code would be very surprised to find any state.
Also, state as you are illustrating is not thread safe - you could get into a right mess doing that kind of thing.
DAOs are usually implemented following the singleton pattern, meaning there is only one instance of the DAO for each entity, so state information would be shared among all parts of the application using the DAO.
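A tiny sketch of why instance state in a shared DAO goes wrong. The names here are invented stand-ins for the CustomerDAO above; with one shared instance, one caller's field value silently overwrites another's:

```java
// A DAO holding per-request state in an instance field -- the anti-pattern
// discussed above.
class StatefulDao {
    private String txnId; // shared across ALL callers of the single instance

    void begin(String id) { this.txnId = id; }
    String current() { return txnId; }
}

public class DaoStateDemo {
    public static void main(String[] args) {
        StatefulDao dao = new StatefulDao(); // one instance, as a singleton would be

        dao.begin("request-A");
        dao.begin("request-B");          // a second caller interleaves

        // request-A's id is gone; whichever caller reads now sees B's state.
        System.out.println(dao.current()); // request-B
    }
}
```

Passing the state as method parameters (or return values) instead of storing it in fields keeps the DAO stateless and safe to share.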
I'm doing some maintenance/evolution on a multi-layered Spring 3.0 project. The client is a heavy RCP application invoking some Spring bean methods from the service layer (managers) on an RMI-based server.
I have several huge methods in the managers, some of them running to more than 250 lines. Here is an example (I've omitted code for clarity):
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public Declaration saveOrUpdateOrDelete(Declaration decla, List<Declaration> toDeleteList ...) {
    if (decla.isNew()) {
        // create from scratch and apply business rules for a creation
        manager1.compute(decla);
        dao1.save(decla);
        ...
    } else if (decla.isCopy()) {
        // copy from another Declaration and apply business rules for a copy
        ...
    } else {
        // update Declaration
        ...
    }
    if (toDeleteList != null) {
        // delete declarations and apply business rules for a mass delete
        ...
    }
}
The first 3 branches are mutually exclusive and represent a unit of work. The last branch (delete) can happen simultaneously with other branches.
Isn't it better to divide this method into something more 'CRUDy' for the sake of clarity and maintainability? I've been thinking of dividing this behemoth into other manager methods like:
public Declaration create(Declaration decla ...){...
public Declaration update(Declaration decla ...){...
public Declaration copyFrom(Declaration decla ...){...
public void delete(List<Declaration> declaList ...){...
But my colleagues say it will transfer complexity and business rules to the client, and that I will lose the benefit of atomicity, etc. Who is right here?
The decision about what updateOrCreateOrWhatever really does is made in the client anyway, as the client has to set the corresponding field in the Declaration object.
The client could equally well just call the appropriate method.
That way the code is definitely more manageable and testable (fewer branches to care about).
The only argument for maintaining it as is is the network round-trips mentioned by @Pangea. I think this could be handled by a custom dispatcher class. IMO it doesn't form a part of the business logic, and as such shouldn't be taken care of in the service layer.
Another thing to take into consideration is transaction logic. Do create/update and deletes have to happen in the same transaction? Can both decla and toDelete be not null at the same time?
One of the basic principles to keep in mind when designing remote services is to make it coarse-grained in order to reduce network latency/round-trips. Also, after going through your code, it seems like the method encapsulates a logical unit of work as it is transactional. In this case, I suggest to keep it as it is.
However, you can still refactor it into multiple methods, as long as they are not exposed to be invoked remotely, which would force the client to manage the transaction from the client layer. So make them private.
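A sketch of that shape: one coarse-grained public entry point, private fine-grained branches. Everything here (the Declaration fields, the in-memory store, the rule bodies) is invented for illustration; in the real service the public method keeps its @Transactional annotation, which this toy mimics with a snapshot/rollback:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the entity in the question.
class Declaration {
    boolean isNew, isCopy;
    String name;
    Declaration(boolean isNew, boolean isCopy, String name) {
        this.isNew = isNew; this.isCopy = isCopy; this.name = name;
    }
}

class DeclarationManager {
    final List<String> store = new ArrayList<>(); // stand-in for the database

    // The single exposed method keeps the remote call coarse-grained and the
    // whole unit of work atomic (in Spring this carries @Transactional).
    public void saveOrUpdateOrDelete(Declaration decla, List<Declaration> toDelete) {
        List<String> snapshot = new ArrayList<>(store); // poor man's rollback
        try {
            if (decla.isNew)       create(decla);
            else if (decla.isCopy) copyFrom(decla);
            else                   update(decla);
            if (toDelete != null)  delete(toDelete);
        } catch (RuntimeException e) {
            store.clear();
            store.addAll(snapshot); // "rollback" on any failure
            throw e;
        }
    }

    // Private: the branches become readable, individually testable units
    // without being exposed as separate remote calls.
    private void create(Declaration d)   { store.add("created:" + d.name); }
    private void update(Declaration d)   { store.add("updated:" + d.name); }
    private void copyFrom(Declaration d) { store.add("copied:" + d.name); }
    private void delete(List<Declaration> ds) {
        for (Declaration d : ds) store.add("deleted:" + d.name);
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        DeclarationManager mgr = new DeclarationManager();
        mgr.saveOrUpdateOrDelete(new Declaration(true, false, "d1"), null);
        System.out.println(mgr.store); // [created:d1]
    }
}
```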
Bad design. If you really have to make the transaction atomic or complete it in one trip, create a more specific method instead of having this.
What's the difference between writing:
public Object doIt(Object... obj){
...
}