@TransactionalEventListener, @Transactional and @Retryable flow - java

When I publish a CustomEvent at the end of a @Transactional and @Retryable(value = StaleStateException.class) method foo(), how is it processed?
If I get a StaleStateException during the commit after the method execution, then @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT) won't be called, and because of @Retryable Spring will call foo() a second time. What is the lifecycle of the first CustomEvent in this case? Will it be cleaned up? Or, after a second, successful call of foo(), will @TransactionalEventListener be called twice?

In case anyone runs into the same worry: the answer is that events handled by a @TransactionalEventListener only live for the duration of the transaction in which they were published. RetryTemplate creates a new transaction for every call of the @Retryable method, so only events published during the successful attempt will reach the @TransactionalEventListener.
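A minimal sketch of that behaviour (class and event names are made up for illustration; @EnableRetry and transaction management are assumed to be configured):

@Service
public class FooService {

    private final ApplicationEventPublisher publisher;

    public FooService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // Every retry attempt runs in its own transaction; an event published during a
    // failed attempt is bound to that attempt's transaction and is discarded with it.
    @Retryable(value = StaleStateException.class)
    @Transactional
    public void foo() {
        // ... database work that may fail with StaleStateException on commit ...
        publisher.publishEvent(new CustomEvent());
    }
}

@Component
public class CustomEventHandler {

    // Invoked only for the transaction that actually commits,
    // i.e. at most once, for the successful attempt.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void on(CustomEvent event) {
        // react to the committed change
    }
}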

Related

Difference between "Propagation.REQUIRED" and "Propagation.REQUIRES_NEW"?

I have a service class (named A) which has a method with the @Async annotation. This async method from class A calls another service class (named B), which has the annotation @Transactional(propagation = Propagation.REQUIRED). Class B calls another service (named C) which has the very same annotation as class B. Class C, in turn, calls a method on a repository class.
This sequence is all triggered by a POST endpoint with a request body.
That said, I'm facing an intermittent issue: sometimes I get the result as expected and sometimes I do not get any result (using exactly the same request body).
Looking into the application's logs, I could see that, when I get no result, the request never reaches the repository class and apparently the thread "dies" in class A (when the async method is called).
So, my main question is: if I change the type of propagation from REQUIRED to REQUIRES_NEW in class C, would it solve my async problem?
REQUIRED means a transaction will be created before the method is invoked; other nested services with REQUIRED will join this transaction.
REQUIRES_NEW in a nested service will create another transaction, independent of the first: it (with REQUIRES_NEW) can be committed even if the first transaction (REQUIRED) is rolled back.
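For illustration, a rough sketch of the difference (service and method names are made up):

@Service
public class OuterService {

    @Autowired
    private InnerService innerService;

    @Transactional(propagation = Propagation.REQUIRED)
    public void outer() {
        // joins the caller's transaction, or starts transaction T1 if none exists
        innerService.inner();
        // if outer() throws after this point, T1 is rolled back, but the work done
        // in inner() has already been committed in its own transaction T2
    }
}

@Service
public class InnerService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void inner() {
        // T1 is suspended here; a new, independent transaction T2 is started
        // and commits (or rolls back) on its own when this method returns
    }
}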
I doubt that the transaction propagation is responsible for the described behaviour (thread "dies"). I would check the thread pools used for async, and also that all invocations run through a proxy.

Commit of transaction happening too late (after the event is processed by another system)

We have a system that sells vouchers, and this selling process must be integrated with another system. This integration happens through AWS SQS queues.
System A processes the order and, at the end of the process, publishes a message to the SQS queue called new-orders-queue.
System B reads data from the new-orders-queue, does some processing and then publishes another event to another SQS queue called another-sqs-queue.
System A reads data from the another-sqs-queue and then updates the order created in step 1.
The ordering process (step 1 above) is big, but nothing tremendously complex. It does some validations against its database (MySQL) and then writes some inserts to some tables.
All of this happens in a Spring @Transactional context.
The problem is that step 3 sometimes happens before the order from step 1 is finally committed to the database, which leads to an error (the order it has to update is not found in the database, because it hasn't been committed yet). If we retry a second later, the process works normally. This does not happen every time, but we have to address the problem.
Have you seen this before?
Below is (really) reduced pseudo-code of step 1:
@Transactional
public Result handleNewOrder(OrderData data) {
    SqsClient sqsClient = new SqsClient();
    validatePrices(data);
    doSomeInserts(data);
    Result result = createResult(data);
    // the last line of the method, just before the return statement,
    // is the line that posts the event to the queue
    sqsClient.sendEvent(Events.create(result));
    return result;
}
At the end of this method annotated with @Transactional, things should be committed, but somehow step 3 is completing before the commit happens (at least it seems that way).
Maybe moving the event publishing out of the transactional boundary is the solution (and actually, I'm in favor of it), because this way we can guarantee that the event will be processed only after the transaction has been committed to the database. But we would have to use some sort of retry mechanism in case our communication with SQS fails.
Is this the way to go, or do you have a better solution?
This sounds like it might be an operation that requires multiple transactions.
For example, you might have two methods, each annotated with @Transactional:
@Transactional
public void startHandleNewOrder(OrderData data) {
    // make changes to the database here and publish event to new-orders-queue
}

@Transactional
public Result finishHandleNewOrder(OrderData data) {
    // await response from another-sqs-queue and compile result
}
This should work assuming that:
A separate service NOT annotated with @Transactional (i.e. it is outside of the transactional boundary) calls these methods in order.
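For example, a non-transactional orchestrator along these lines (the class and bean names are made up):

@Service
public class OrderOrchestrator {

    @Autowired
    private OrderHandler orderHandler; // the bean containing the two @Transactional methods above

    // No @Transactional here: each call below runs, and commits, in its own transaction,
    // so the database changes from startHandleNewOrder() are committed before
    // finishHandleNewOrder() waits for the response.
    public Result handleNewOrder(OrderData data) {
        orderHandler.startHandleNewOrder(data);
        return orderHandler.finishHandleNewOrder(data);
    }
}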
Alternatively, you could implement this without annotations, like so:
@Autowired
PlatformTransactionManager transactionManager;

@PersistenceContext
EntityManager entityManager;

public Result handleNewOrder(OrderData data) {
    boolean rollback = true;
    TransactionStatus status = getTransaction();
    try {
        // make changes to the database here and publish event to new-orders-queue
        transactionManager.commit(status);
        rollback = false;
    } finally {
        if (rollback && !status.isCompleted())
            transactionManager.rollback(status);
    }

    // this may or may not be necessary if you want to ensure you're reading
    // fresh data from the database (otherwise data cached from step #1 may be used)
    entityManager.clear();

    rollback = true;
    status = getTransaction();
    try {
        // wait for a response from another-sqs-queue and compile the result here
        Result result = null;
        transactionManager.commit(status);
        rollback = false;
        return result;
    } finally {
        if (rollback && !status.isCompleted())
            transactionManager.rollback(status);
    }
}

private TransactionStatus getTransaction() {
    DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    return transactionManager.getTransaction(def);
}
In the end, as suggested by M. Deinum, I've implemented @TransactionalEventListener with the default phase (TransactionPhase.AFTER_COMMIT).
Something like this:
@TransactionalEventListener(classes = {SellVoucherEvent.class})
public void dispatch(SellVoucherEvent event) {
    sqsClient.sendMessage("queue", turnEventToString(event));
}
This method is implemented in a @Component class, and from my transactional context I publish the event via an ApplicationEventPublisher (which is injected by Spring).
Example:
private final ApplicationEventPublisher publisher; // injected by Spring

@Transactional
public Result handleNewOrder(OrderData data) {
    validatePrices(data);
    doSomeInserts(data);
    Result result = createResult(data);
    SellVoucherEvent event = createEvent();
    publisher.publishEvent(event); // publish the application event
    return result;
}
Then, after the commit, the dispatch method annotated with @TransactionalEventListener is invoked and the event is sent to SQS. This way we can guarantee that the event will only be processed after the commit.

spring async method call with JPA transactions

I am implementing a backend service with Spring Boot. This service receives a REST request, executes some database operations and finally updates the status of the record.
After that, I would like to start a new async process and execute another data manipulation on the same record this way:
@Service
public class ClassA {

    @Autowired
    private ClassB classB;

    @Autowired
    private MyEntityRepository repo;

    @Transactional
    public void doSomething(Long id) {
        // executing the business logic
        if (isOk()) {
            repo.updateStatus(id, Status.VERIFIED);
        }
        // I need to commit this DB transaction and return.
        // But after this transaction is committed, I need
        // to start an async process that must work on the
        // same record that was updated before.
        classB.complete(id);
    }
}
And this is my async method:
@Service
public class ClassB {

    @Autowired
    private MyEntityRepository repo;

    @Async
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void complete(Long id) {
        Optional<MyEntity> myEntity = repo.findById(id);
        if (myEntity.isPresent() && myEntity.get().getStatus() == Status.VERIFIED) {
            // execute 'business logic B'
        }
    }
}
classA.doSomething() is called multiple times with the same id, but business logic B must be executed only when the record status in the DB is VERIFIED.
The above solution works fine.
But my concern is the following: my test database is small, and the classA.doSomething() method always finishes and closes its transaction BEFORE classB.complete() starts to check the status of the same record in the DB. I can see in the log that the SQL statements are executed in the proper order:
* UPDATE STATUS FROM TABLE ... WHERE ID = 1 // doSomething()
* COMMIT
* SELECT * FROM TABLE WHERE ID = 1 // complete()
But is it 100% guaranteed that the first method, classA.doSomething(), will always finish and commit its transaction before the second, the async classB.complete() call, checks the status of the same record?
If the async method classB.complete() is executed before classA.doSomething() finishes and commits, then the business logic breaks and business logic B will be skipped (the new DB transaction will not see the updated status yet), which would cause a big issue. Maybe this can happen if the database is huge and the commit takes longer than it does in my small test DB.
Maybe I could work with the DB transaction isolation levels described here, but changing them could cause another issue in another part of the app.
What is the best way to implement this logic properly so that the correct execution order with the async method is guaranteed?
It is NOT GUARANTEED that classA.doSomething() will always finish and commit its transaction before the async classB.complete() call checks the status of the same record.
Transactions are implemented as some kind of interceptor appropriate for the framework (this is true for CDI too). The method marked @Transactional is intercepted by the framework, so the transaction will not end before the closing } of the method. As a matter of fact, if the transaction was started by another method higher in the stack, it will end even later.
So ClassB has plenty of time to run and see inconsistent state.
I would place the first part of doSomething() in a separate method running in a REQUIRES_NEW transaction (you may need to place it in a different class, depending on how you configured transaction interceptors: if you are using AspectJ-style AOP, Spring may be able to intercept calls to methods of the same object; otherwise it relies on the injected proxy object to do the interception, and calling a method through this will not activate the interceptor; again, this is true for other frameworks as well, like CDI and EJB). The method doSomething() calls the first-part method, which finishes in a new transaction, and then ClassB can continue asynchronously.
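A sketch of that restructuring, reusing the classes from the question (the new StatusUpdater bean is made up; isOk() stands for the business check from the original code):

@Service
public class StatusUpdater {

    @Autowired
    private MyEntityRepository repo;

    // Commits in its own transaction before the caller moves on.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void verify(Long id) {
        if (isOk()) {
            repo.updateStatus(id, Status.VERIFIED);
        }
    }
}

@Service
public class ClassA {

    @Autowired
    private StatusUpdater statusUpdater;

    @Autowired
    private ClassB classB;

    // Intentionally not @Transactional: by the time classB.complete(id) is scheduled,
    // the status update has already been committed by statusUpdater.verify(id).
    public void doSomething(Long id) {
        statusUpdater.verify(id);
        classB.complete(id);
    }
}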
Now, in that case (as correctly pointed out in the comment), there is a chance that the first transaction succeeds and the second fails. If this is the case, you will have to put logic in the system to compensate for this inconsistent state. Frameworks cannot deal with it because there is not one recipe; it is a per-case "treatment". Some thoughts, in case they help: make sure that the state of the system after the first transaction clearly says that the second transaction should complete "shortly after". E.g. keep a "1st tx committed at" field; a scheduled task can check this timestamp and take action if it is too far in the past. JMS gives you all this - you get retries and a dead letter queue for the failed cases.
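A rough sketch of that compensation idea (the firstTxCommittedAt field, the repository query method, and the delays are made up for illustration):

@Component
public class StuckRecordChecker {

    @Autowired
    private MyEntityRepository repo;

    // Periodically look for records whose first transaction committed a while ago
    // but whose follow-up processing (business logic B) never happened.
    @Scheduled(fixedDelay = 60_000)
    public void recoverStuckRecords() {
        Instant threshold = Instant.now().minus(Duration.ofMinutes(5));
        for (MyEntity entity : repo.findByStatusAndFirstTxCommittedAtBefore(Status.VERIFIED, threshold)) {
            // compensate: re-trigger business logic B, raise an alert, etc.
        }
    }
}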

Thread submit a task and do not wait for completion in spring

I am writing a service where I want to expose an endpoint that calls another service, and if that service call is successful I want to send the result back to the UI / calling app.
In parallel, before sending back the response, I want to execute/submit a task that should run in the background, and my call should not depend on the success or failure of this task.
Before returning the response I want to do this:
executorService.execute(object);
This should not be a blocking call.
Any suggestions?
Spring async methods are the way to go here, as was suggested in the comments.
Some caveats:
Async methods can have different return types. It's true that they can return CompletableFuture, but that is for the case where you call them from some background process and would like to wait for or check their execution status, or perhaps execute something else when the future is ready. In your case it seems that you want "fire-and-forget" behavior, so you should use a void return type for your @Async annotated method.
Make sure that you add @EnableAsync. Under the hood it works by wrapping the bean that has @Async methods with a proxy, so the proxy is what actually gets injected into your service; @EnableAsync turns on this proxy-generation mechanism. You can usually verify this in the debugger by checking the actual type of the injected reference.
Consider customizing the task executor to make sure that you're running the async methods with an executor that matches your needs. For example, you probably don't want every invocation of an async method to spawn a new thread (and there is an executor that behaves exactly like this). You can read about the various executors here, for example.
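A small configuration sketch along those lines (the pool sizes are arbitrary):

@Configuration
@EnableAsync
public class AsyncConfig {

    // Without an explicitly configured executor, plain Spring falls back to a
    // SimpleAsyncTaskExecutor, which spawns a new thread per invocation;
    // a bounded ThreadPoolTaskExecutor is usually a better fit.
    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("async-");
        executor.initialize();
        return executor;
    }
}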
Update
Code-wise you should do something like this:
@Component // must be a Spring-managed bean so the @Async proxy is applied
public class MyAsyncHandler {

    @Async
    public void doAsyncJob(...) {
        ...
    }
}

@Service
public class MyService {

    @Autowired // or autowired constructor
    private MyAsyncHandler asyncHandler;

    public Result doMyMainJob(params) {
        dao.saveInDB();
        // do other synchronous stuff
        Result res = prepareResult();
        asyncHandler.doAsyncJob(); // this returns immediately
        return res;
    }
}

When does the disposer execute?

I have a POJO producer which produces MyResourceManager:
@TraderResouceManager @RequestScoped public MyResourceManager(){ ... ... }
MyResourceManager is injected into an SLSB:
@Inject @TraderDB private MyResourceManager rm;
The disposer is as follows:
public void close(@Disposes @TraderResouceManager MyResourceManager rm) {
    rm.close();
}
Question
When does close() execute?
Is it before the transaction is closed, or after?
EDIT: The question perhaps needs additional explanation. Let's take a database connection analogy.
I create a producer method with @RequestScoped, using a POJO which creates a new connection per request.
What I need is to dispose of the connection at the end of the request.
This connection is shared by other beans (for now SLSBs). In a given request there may be many beans involved, running inside a transaction.
Hence, additionally, I need to close the connection only after all the transactions are logically closed.
Question
Will I be able to achieve this behavior with the above code when I use container-managed transactions?
close() executes when the current request is terminated, since your producer method is request-scoped. If you call your SLSB from a JSF request or a servlet, then the lifecycle of those requests determines when MyResourceManager will be disposed. If your SLSB is called remotely, it will be terminated as soon as the call returns.
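For reference, a complete producer/disposer pair might look roughly like this (the producer class name is made up; the qualifier is taken from the question):

@ApplicationScoped
public class ResourceManagerProducer {

    // One MyResourceManager per request; created lazily when first injected.
    @Produces
    @TraderResouceManager
    @RequestScoped
    public MyResourceManager create() {
        return new MyResourceManager();
    }

    // Called by the container when the request scope is destroyed, i.e. after the
    // request (and any container-managed transactions started during it) has ended.
    public void close(@Disposes @TraderResouceManager MyResourceManager rm) {
        rm.close();
    }
}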
