Grails save() tries to create new object when it should update - java

In my service code, I am trying to create or update a Person domain object:
@Transactional
def someServiceMethod(some params....) {
    try {
        def person = Person.findByEmail(nperson.email.toLowerCase())
        if (!person) {
            person = new Person()
            person.properties = nperson.properties
        } else {
            // update the person parameters (first/last name)
            person.firstName = nperson.firstName
            person.lastName = nperson.lastName
            person.phone = nperson.phone
        }
        if (person.validate()) {
            person.save(flush: true)
            //... rest of code
        }
        // rest of other code....
    } catch (e) {
        log.error("Unknown error: ${e.getMessage()}", e)
        e.printStackTrace()
        return null
    }
}
Now the above code OCCASIONALLY throws the following exception when trying to save a Person object with an already existing email:
Hibernate operation: could not execute statement; SQL [n/a]; Duplicate entry 'someemail@gmail.com' for key 'email_UNIQUE'; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry 'someemail@gmail.com' for key 'email_UNIQUE'
This is very strange, because I am already finding the person by email, so save() should update the record instead of creating a new one.
I was wondering why this is happening!
EDIT:
I am on grails 2.4.5 and Hibernate plugin in BuildConfig is:
runtime ':hibernate4:4.3.8.1'
EDIT2:
My application is deployed on multiple servers, hence a synchronized block won't work

If this is a concurrency issue, here is what we do in such cases. We have a lot of concurrent background processes which work on the same tables. If there is such an operation, it indeed is in a synchronized block, so the code may look like:
class SomeService {
    static transactional = false //service cannot be transactional
    private Object someLock = new Object() //synchronized block on some object must be used

    def someConcurrentSafeMethod() {
        synchronized (someLock) {
            def person = Person.findByEmail(nperson.email.toLowerCase())
            ...
            person.save(flush: true) // flush is very important, must be done in synchronized block
        }
    }
}
There are a few important points to make this work (from our experience, not official):
Service cannot be transactional - if the service is transactional, the transaction is committed after the method returns a value and synchronization inside the method will not be enough. Programmatic transactions may be another way (see the sketch after this list).
A synchronized method is not enough - synchronized def someConcurrentSafeMethod() will not work, probably because the service is wrapped in a proxy.
The session MUST be flushed inside the synchronized block.
Every object that will be saved should be read inside the synchronized block; if you pass it in from an external method, you may run into an optimistic locking failure.
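For the "programmatic transactions" point, here is a rough Java/Spring sketch of how the lock and the transaction could be nested. This is only an illustration of the idea, not the original Grails code; PersonRepository, PersonData and the setters used here are hypothetical stand-ins for the GORM domain class and its finders:

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.support.TransactionTemplate;

    @Service
    public class PersonSyncService {

        private final Object personLock = new Object();
        private final TransactionTemplate txTemplate;
        private final PersonRepository personRepository; // hypothetical repository

        public PersonSyncService(PlatformTransactionManager txManager, PersonRepository personRepository) {
            this.txTemplate = new TransactionTemplate(txManager);
            this.personRepository = personRepository;
        }

        public Person createOrUpdate(PersonData nperson) {
            synchronized (personLock) {
                // The transaction begins and commits inside the lock, so the
                // flush/commit cannot escape the synchronized block.
                return txTemplate.execute(status -> {
                    Person person = personRepository.findByEmail(nperson.getEmail().toLowerCase());
                    if (person == null) {
                        person = new Person();
                        person.setEmail(nperson.getEmail().toLowerCase());
                    }
                    person.setFirstName(nperson.getFirstName());
                    person.setLastName(nperson.getLastName());
                    person.setPhone(nperson.getPhone());
                    return personRepository.saveAndFlush(person); // Spring Data JPA-style save + flush
                });
            }
        }
    }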
UPDATED
Because the application is deployed on a distributed system, the above will not solve the issue here (it still may help others). After the discussion we had on Slack, I'll just summarize potential ways to do that:
pessimistic locking of updated objects and a lock of the whole table for inserts (if possible)
moving 'dangerous' database-related methods to a single server with some API like REST and calling it from the other deployments (using the synchronized approach from above)
using a multiple-save approach - if the operation fails, catch the exception and try again (see the retry sketch below). This is supported by integration libraries like Spring Integration or Apache Camel and is a common enterprise integration pattern. See request-handler-advice-chain for Spring Integration as an example
using something to queue the operations, for example a JMS server
If anyone has more ideas please share them.
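For the "catch and retry" point, a minimal Java-flavoured sketch, again using the hypothetical PersonRepository from the previous sketch; org.springframework.dao.DataIntegrityViolationException is the exception Spring translates the MySQL duplicate-key error into. Each attempt should ideally run in its own transaction so a failed insert does not poison the retry:

    import org.springframework.dao.DataIntegrityViolationException;

    public Person saveWithRetry(PersonData nperson, int maxAttempts) {
        DataIntegrityViolationException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                // Re-read on every attempt: after a duplicate-key failure, the row
                // inserted by the competing thread/server will be found and updated.
                Person person = personRepository.findByEmail(nperson.getEmail().toLowerCase());
                if (person == null) {
                    person = new Person();
                    person.setEmail(nperson.getEmail().toLowerCase());
                }
                person.setFirstName(nperson.getFirstName());
                person.setLastName(nperson.getLastName());
                person.setPhone(nperson.getPhone());
                return personRepository.saveAndFlush(person);
            } catch (DataIntegrityViolationException e) {
                last = e; // duplicate entry: another writer got there first, try again
            }
        }
        throw last;
    }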


Finding caller method from a Spring service

Summarize
Goal
I have an application that is written in Java using the Spring framework. There is a service that is being used as the handler for grabbing and releasing locks in the database (InnoDB). My goal is to be able to log the grabbing and releasing of the locks to create a lock history. For each lock interaction, I would like to know not only the name of the lock involved, but also where this request is coming from in the code (if possible, class name, method name, and line number).
My expected database entry will look something like this:
id | lock_name | clazz       | method       | line | lock_date           | unlock_date         | unlock_type
0  | tb_member | MemberTools | createMember | 123  | 2021-12-23 10:16:00 | 2021-12-23 10:16:01 | COMMIT
1  | tb_member | MemberTools | editMember   | 234  | 2021-12-23 10:16:01 | 2021-12-23 10:16:02 | COMMIT
I would like to know if there is an easy way to obtain this given that I am using the Spring framework.
Describe
So far, I have tried two things:
Forcing the caller to pass a reference to itself or its current StackTraceElement (using Thread.currentThread().getStackTrace()[1]). This is not only extremely repetitive, but it is also prone to human error, as a developer might not realize that they need to pass in some reference to themselves.
Inside of the lock service, using the getStackTrace method and walking through the elements to find the "correct" one. This is made very hard by Spring and the fact that, before a call actually reaches the inside of a class with the @Service annotation, the call stack is muddled by a number of calls between proxies and generated classes and such. Unless there is a deterministic way to find the number of calls in between the service and the caller, this doesn't seem like a good way either (see the StackWalker sketch below).
I have referenced this Stack Overflow question while working, but these do not take the usage of the Spring framework into account.
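One hedged sketch of the stack-walking idea from the second point, using the Java 9+ StackWalker instead of getStackTrace so there is no hard-coded frame offset. The class/package filters below are assumptions and would have to be adjusted to the real project layout; this is illustrative, not code from the question:

    import java.util.Optional;

    public final class CallerResolver {

        private static final StackWalker WALKER =
                StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE);

        // Returns the first stack frame that does not look like infrastructure:
        // CGLIB proxies contain "$$" in the class name, Spring's own calls live
        // under org.springframework, and the lock service package itself is skipped.
        public static Optional<StackWalker.StackFrame> findBusinessCaller() {
            return WALKER.walk(frames -> frames
                    .filter(f -> !f.getClassName().contains("$$"))
                    .filter(f -> !f.getClassName().startsWith("org.springframework."))
                    .filter(f -> !f.getClassName().startsWith("java."))
                    .filter(f -> !f.getClassName().startsWith("jdk."))
                    .filter(f -> !f.getClassName().startsWith("com.example.locking.")) // assumed lock service package
                    .findFirst());
        }
    }

The caller's class, method and line number would then come from StackFrame.getClassName(), getMethodName() and getLineNumber().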
Show
A reproducible example will look something like this. First, the structure:
root\
LockService.java
getLock()
MemberTools.java
createMember()
LockService.java:
@Service
public class LockService {

    @Autowired
    private LockMapper lockMapper; // mapper dependency (declaration assumed, not shown in the original snippet)

    @Transactional
    public Lock getLock(String key) {
        Lock searchLock = new Lock();
        searchLock.setKey(key);
        lockMapper.getLock(searchLock);
        LockHistory lockHistory = new LockHistory();
        // Fill out lockHistory object...
        lockMapper.markAsLocked(lockHistory);
        attachTransactionCompletedListener(lockHistory);
        return searchLock;
    }

    private void attachTransactionCompletedListener(LockHistory lockHistory) {
        /* Attach a listener onto the current Spring transaction so that we
         * can update the database entry when the transaction finishes and
         * the lock is released.
         */
    }
}
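As an aside, the attachTransactionCompletedListener stub could plausibly be filled in with Spring's transaction synchronization API. This is a sketch only, assuming Spring 5.3+ where TransactionSynchronization has default methods, and assuming hypothetical setUnlockDate/setUnlockType/markAsUnlocked members that are not in the original snippet:

    // imports needed in LockService:
    // import java.time.LocalDateTime;
    // import org.springframework.transaction.support.TransactionSynchronization;
    // import org.springframework.transaction.support.TransactionSynchronizationManager;

    private void attachTransactionCompletedListener(final LockHistory lockHistory) {
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
            @Override
            public void afterCompletion(int status) {
                // Map the transaction outcome onto the unlock_type column
                String unlockType = (status == STATUS_COMMITTED) ? "COMMIT"
                        : (status == STATUS_ROLLED_BACK) ? "ROLLBACK" : "UNKNOWN";
                lockHistory.setUnlockDate(LocalDateTime.now()); // hypothetical setter
                lockHistory.setUnlockType(unlockType);          // hypothetical setter
                lockMapper.markAsUnlocked(lockHistory);         // hypothetical mapper method
            }
        });
    }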
MemberTools.java:
public class MemberTools {

    @Autowired
    LockService lockService;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void createMember() {
        lockService.getLock("tb_member");
        /* Do create member stuff...
         * When this returns, the lock will be released
         * (either from COMMIT, ROLLBACK, or UNKNOWN Spring error)
         */
    }
}
By the time the getLock() method is reached, the stack trace is muddled with many calls that Spring inserts (proxies, reflections, etc.). Putting a breakpoint in this function and examining Thread.currentThread().getStackTrace() will show this.

Event observer method calling service methods?

OK, what I'm trying to accomplish is the following:
In a Java enterprise bean I want to move a file to a different directory unless a database operation fails (namely, I want to store the correct location of the file in the database; if something goes wrong within the transaction and it is rolled back, my database points to the wrong location for the file if I have already moved it. Not good.).
I tried to fire an event with an observing method using the transaction phase AFTER_SUCCESS to move the file. So far so good. But the file move could also fail (maybe I don't have access to the target directory or something like that) and I want to write that failure into the database as well. Unfortunately it seems like the observer method does not provide me with a transaction, and my method call fails.
Is the idea of calling a service method from the observing method a bad one? Or am I doing it wrong?
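For reference, a minimal sketch of the setup the question describes (all class and member names here are illustrative assumptions, not taken from the question, and each class would live in its own file): the event is fired inside the transaction, the observer only runs AFTER_SUCCESS, and because no transaction is active at that point, recording a failed move needs a bean method that opens a new one:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.enterprise.event.Observes;
    import javax.enterprise.event.TransactionPhase;
    import javax.inject.Inject;

    // simple value object carrying the planned move
    class FileMovedEvent {
        final String source;
        final String target;
        FileMovedEvent(String source, String target) {
            this.source = source;
            this.target = target;
        }
    }

    @Stateless
    public class FileMoveObserver {

        @Inject
        private MoveFailureRecorder recorder;

        // runs only after the original transaction committed successfully
        public void onMoved(@Observes(during = TransactionPhase.AFTER_SUCCESS) FileMovedEvent event) {
            try {
                Files.move(Paths.get(event.source), Paths.get(event.target));
            } catch (IOException e) {
                // no transaction is active here, so delegate to a bean that opens one
                recorder.recordFailure(event, e);
            }
        }
    }

    @Stateless
    public class MoveFailureRecorder {

        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public void recordFailure(FileMovedEvent event, Exception cause) {
            // persist the failure in a fresh transaction
        }
    }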
Generally, you should work with the transactional resource first and then with the non-transactional one. The reason is that you can roll back the transactional resource but you cannot roll back the non-transactional one.
I mean: if you were able to update the row in the database and then moving the file fails, you can safely roll back the database update. But if moving the file succeeds and you then cannot update the database for some reason, you cannot roll back the moved file.
In your particular case I would suggest not actually moving the file, but copying it instead, so that in the database you will always have the actual location of the new copy. In a different thread you can delete the old copies somehow. You need to use copies because an IOException can be thrown after the actual file was moved, and when you roll back the transaction in the database you would end up with the wrong old location. Try this approach (using EJB container-managed transactions; you can easily find the Spring variant of it):
@TransactionAttribute(REQUIRED)
void move(String newLocation, int fileId) throws CouldNotMoveException, DatabaseException {
    try {
        database.updateFileLocation(fileId, newLocation);
    } catch (Exception exc) {
        throw new DatabaseException(exc);
    }
    try {
        file.copyFile(fileId, newLocation);
    } catch (IOException exc) {
        throw new CouldNotMoveException(exc);
    }
}
You will need to create your exceptions like this in order to roll back your transactions (or just use RuntimeException - check the documentation of your container about reacting to exceptions and rollback policies):
@ApplicationException(rollback = true)
public class DatabaseException extends Exception {
    // omitted
}

@ApplicationException(rollback = true)
public class CouldNotMoveException extends Exception {
    // omitted
}
Here your client code can react to CouldNotMoveException and record the failed move in the database, so you will fulfil your requirements.
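A rough sketch of what that client code could look like (everything except move(...), CouldNotMoveException and DatabaseException is an assumption; the audit bean's method is expected to be marked REQUIRES_NEW so the failure record survives the rollback of move()):

    import javax.ejb.EJB;
    import javax.ejb.Stateless;

    @Stateless
    public class FileMoveClient {

        @EJB
        private FileMoveService fileMoveService;   // hypothetical bean owning move(...)

        @EJB
        private MoveFailureAudit moveFailureAudit; // hypothetical REQUIRES_NEW audit bean

        public void relocate(int fileId, String newLocation) {
            try {
                fileMoveService.move(newLocation, fileId);
            } catch (CouldNotMoveException e) {
                // move() was rolled back; record the failed move in its own transaction
                moveFailureAudit.recordFailedMove(fileId, newLocation, e);
            } catch (DatabaseException e) {
                // nothing was copied; decide whether to retry or surface the error
            }
        }
    }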

Creating a transaction through DataSource.getTransaction(def) does not set the currentTransactionStatus to it

We have an application with three databases. Two of them are only very seldom updated. We tried JPA to create transactions around them, and it worked for the databases, but Grails then did not work in several places (GSP related, I am told). This was tried quite a while ago (and not by me).
Due to delivery pressure we needed a solution that at least works for us, so I created a new aspect for the methods changing data in multiple databases. I got this to work; it is a fairly simple approach.
In the aspect we request to start a transaction for each data source, by calling getTransaction(TransactionDefinition def) with the propagation set to REQUIRES_NEW. We then proceed and finally roll back or commit depending on the outcome of the call.
However, one test flow failed. This is the scenario where the code requests a rollback by calling TransactionAspectSupport.currentTransactionStatus().setRollbackOnly(). Of the three TransactionStatuses obtained initially, none actually returns isRollbackOnly() as true. However, calling TransactionAspectSupport.currentTransactionStatus().isRollbackOnly() does return true. So this seems to point to a different transaction status.
I have not been able to figure out how to make this work, other than checking this additional status. I could not find a way to change the currentTransactionStatus to that of the created TransactionStatus. Looking at the TransactionTemplate implementation, I seem to do things correctly (it also just calls getTransaction() on the datasource).
The code calling the decorated method has specified @Transactional(propagation=Propagation.NOT_SUPPORTED), so I expected no currentTransactionStatus, but one is there.
However, if it is not there, the proxied code will not be able to request a rollback the standard way, which I want to be able to fix.
So the question is: how do I start a transaction correctly from an aspect so that the currentTransactionStatus is set correctly, or how do I set the currentTransactionStatus to what I think is the correct one?
Regards,
Wim Veldhuis.
I finally figured it out.
@Transactional leads to a different code path, where eventually TransactionAspectSupport.invokeWithinTransaction is invoked. This method will set up the current transaction correctly.
So in order to make my approach work, I needed to derive from TransactionAspectSupport and do a number of cast operations so I could get to the correct values for the invokeWithinTransaction call, and within the guarded function block use getTransaction(def) to obtain transactions for the OTHER databases. I chose the most important database to be the one used for the invokeWithinTransaction call.
To make it work I also had to provide a TransactionAttributeSource that returns my default transaction attributes. That one is stored in the TransactionAspectSupport base class during initialization.
@Around("@annotation(framework.db.MultiDbTransactional)")
public Object multiDbTransaction(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
    // Get class and method, needed for parent invocation. We need to cast to the actual
    // implementation
    MethodInvocationProceedingJoinPoint mipJoinPoint = (MethodInvocationProceedingJoinPoint) proceedingJoinPoint;
    MethodSignature signature = (MethodSignature) mipJoinPoint.getSignature();
    Class<?> clazz = mipJoinPoint.getTarget().getClass();
    Method method = signature.getMethod();
    return invokeWithinTransaction(method, clazz, new InvocationCallback() {
        @Override
        public Object proceedWithInvocation() throws Throwable {
            // This class will create the other transactions, not of interest here.
            MultiDbTxnContext ctx = new MultiDbTxnContext();
            ctx.startTransactions();
            /*
             * We have started the transactions, so do the job. We mimic DEFAULT spring behavior
             * regarding exceptions, so runtime exceptions roll back, the rest commits.
             */
            try {
                Object result = proceedingJoinPoint.proceed();
                ctx.finishTransactions();
                return result;
            } catch (Error | RuntimeException re) {
                ctx.rollbackTransactions();
                throw re;
            } catch (Throwable t) {
                ctx.commitTransactions();
                throw t;
            }
        }
    });
}
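For completeness, one possible wiring of the TransactionAttributeSource mentioned above. This is a sketch under the assumption that the aspect extends TransactionAspectSupport and that Spring's MatchAlwaysTransactionAttributeSource, which returns the same default PROPAGATION_REQUIRED attribute for every method, is good enough as the default:

    import javax.annotation.PostConstruct;
    import org.aspectj.lang.annotation.Aspect;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.interceptor.MatchAlwaysTransactionAttributeSource;
    import org.springframework.transaction.interceptor.TransactionAspectSupport;

    @Aspect
    public class MultiDbTransactionAspect extends TransactionAspectSupport {

        @Autowired
        private PlatformTransactionManager transactionManager; // manager of the "most important" database

        @PostConstruct
        public void setUp() {
            setTransactionAttributeSource(new MatchAlwaysTransactionAttributeSource());
            setTransactionManager(transactionManager);
        }

        // ... the @Around advice shown above goes here ...
    }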

How to disable JBPM persistence?

I'm trying to implement a few tests with JBPM 6. I'm currently working on a simple hello world bpmn2 file, which is loaded correctly.
My understanding of the documentation (Click) is that persistence should be disabled by default: "By default, if you do not configure the process engine otherwise, process instances are not made persistent."
However, when I try to implement it, and without doing anything special to enable persistence, I hit persistence related problems every time I try to do anything.
javax.persistence.PersistenceException: No Persistence provider for EntityManager named org.jbpm.persistence.jpa
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:69)
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:47)
    at org.jbpm.runtime.manager.impl.jpa.EntityManagerFactoryManager.getOrCreate(EntityManagerFactoryManager.java:33)
    at org.jbpm.runtime.manager.impl.DefaultRuntimeEnvironment.init(DefaultRuntimeEnvironment.java:73)
    at org.jbpm.runtime.manager.impl.RuntimeEnvironmentBuilder.get(RuntimeEnvironmentBuilder.java:400)
    at org.jbpm.runtime.manager.impl.RuntimeEnvironmentBuilder.get(RuntimeEnvironmentBuilder.java:74)
I create my runtime environment the following way:
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
        .newDefaultInMemoryBuilder()
        .persistence(false)
        .addAsset(ResourceFactory.newClassPathResource("examples/helloworld.bpmn2.xml"), ResourceType.BPMN2)
        .addAsset(ResourceFactory.newClassPathResource("examples/newBPMNProcess.bpmn"), ResourceType.BPMN2)
        .get();
As my understanding is that persistence should be disabled by default, I don't see what I'm doing wrong. It could be linked to something included in one of my dependencies, but I haven't found anything on that either.
Has anybody faced the same issue already, or does anyone have any advice?
Thanks
A RuntimeManager is a combination of a process engine and a human task service. The human task service needs persistence (to start the human tasks etc.), that's why it's still asking for a datasource, even if you configure the engine to not use persistence.
If you want to use an engine without our human task service, you don't need persistence at all, but I wouldn't use a RuntimeManager in that case; simply create a ksession from the kbase directly:
http://docs.jboss.org/jbpm/v6.1/userguide/jBPMCoreEngine.html#d0e1805
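A minimal sketch of that approach (assuming the process definitions are packaged on the classpath in a kjar with a kmodule.xml, and an illustrative process id); no RuntimeManager and no persistence are involved:

    import org.kie.api.KieBase;
    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieContainer;
    import org.kie.api.runtime.KieSession;

    public class PlainKsessionExample {

        public static void main(String[] args) {
            KieServices ks = KieServices.Factory.get();
            KieContainer kContainer = ks.getKieClasspathContainer();
            KieBase kBase = kContainer.getKieBase();        // default kbase from kmodule.xml
            KieSession kSession = kBase.newKieSession();    // plain, non-persistent session

            kSession.startProcess("com.sample.hello");      // illustrative process id
            kSession.dispose();
        }
    }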
The InMemoryBuilder which you use in your code is supposed to not be persistent (as per the API documentation), but it actually adds a persistence manager to the environment, just with an InMemoryMapper instead of a JPAMapper, because of the way the init() method in DefaultRuntimeEnvironment is implemented:
public void init() {
    if (emf == null && getEnvironmentTemplate().get(EnvironmentName.CMD_SCOPED_ENTITY_MANAGER) == null) {
        emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa");
    }
    addToEnvironment(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
    if (this.mapper == null) {
        if (this.usePersistence) {
            this.mapper = new JPAMapper(emf);
        } else {
            this.mapper = new InMemoryMapper();
        }
    }
}
As you can see above, this still tries to getOrCreate() a persistence unit (I have seen a better implementation which also checks the value of the persistence attribute somewhere, but the issue here is that DefaultRuntimeEnvironment doesn't do that).
What you need to start with to get away without persistence is a newEmptyBuilder():
RuntimeEnvironment env = RuntimeEnvironmentBuilder.Factory.get()
        .newEmptyBuilder()
        .knowledgeBase(KieServices.Factory.get().getKieClasspathContainer().getKieBase("my-knowledge-base"))
        // ONLY REQUIRED FOR PER-REQUEST AND PER-INSTANCE STRATEGY
        //.addEnvironmentEntry("IS_JTA_TRANSACTION", false)
        .persistence(false)
        .get();
Do mind though that this will only work for Singleton runtime managers - PerProcessInstance and PerRequest expect to be able to suspend a running transaction if necessary, which is only possible if you have an entity manager to be able to persist state.
For testing with those two strategies also use addEnvironmentEntry() above.

Coherence and container managed transactions

I'm implementing simultaneous writes into a database and Oracle Coherence 3.7.1 and want to make the whole operation transactional.
I would like a critique of my approach.
Currently, I've created a façade class like this:
public class Facade {

    @EJB
    private JdbcDao jdbcDao;

    @EJB
    private CoherenceDao coherenceDao;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    private void updateMethod(List<DomainObject> list) {
        jdbcDao.update(list);
        coherenceDao.update(list);
    }
}
I guess the JDBC DAO would not need to do anything specific about transactions; if something happens, Hibernate would throw some kind of RuntimeException.
public class JdbcDao {
    private void update(List<DomainObject> list) {
        // I presume there is nothing specific I have to do about transactions.
        // if I don't catch any exceptions it would work just fine
    }
}
Here is the interesting part. How do I make Coherence support transactions?
I guess I should open a Coherence transaction inside the update() method and on any exception inside it throw a RuntimeException myself?
I am currently thinking of something like this:
public class CoherenceDao {
    private void update(List<DomainObject> list) {
        // how should I make it transactional?
        // I guess it should somehow throw RuntimeException?
        TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
        mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
        mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
        // gather the cache(s) into a Collection
        Collection txnCollection = Collections.singleton(mapTx);
        try {
            mapTx.begin();
            // put into mapTx here
            CacheFactory.commitTransactionCollection(txnCollection, 1);
        } catch (Throwable t) {
            CacheFactory.rollbackTransactionCollection(txnCollection);
            throw new RuntimeException();
        }
    }
}
Would this approach work as expected?
I know that you asked this question a year ago and my answer now might not be of as much value to you, but I'll still give it a try.
What you are trying to do works as long as there is no RuntimeException after the call to coherenceDao.update(list). You might be assuming that you don't have any lines of code after that line, but that's not the whole story.
As an example: you might have some deferrable constraints in your database. Those constraints will be applied when the container is trying to commit the transaction, which happens on method exit of updateMethod(List<DomainObject> list) and after your call to coherenceDao.update(list). Another case would be a connection timeout to the database after coherenceDao.update(list) is executed but still before the transaction commit.
In both cases the update method of your CoherenceDao class executes safe and sound, yet your Coherence transaction is no longer rolled back, which will put your cache in an inconsistent state: those DB or Hibernate exceptions surface as a RuntimeException, and that will cause your container-managed transaction to be rolled back!
