Hi, I wanted to ask how to commit and close the connection at the end of the first part of the method, before the second method gets called.
@Transactional(value = "transactionManagerDC")
public void execute() {
    // 1. select from DB - takes 2 minutes
    executeApi();
}

public void executeApi() {
    // API call
}
But the long API call keeps the transaction open; the connection sits idle and eventually gets terminated. How can I commit and close the transaction before the API call method runs?
When a transaction is declared using the @Transactional annotation, it will end (commit or roll back) when program control returns from the annotated method, either normally or via an exception.
To have more control over the transactional execution, I would inject a PlatformTransactionManager and use a TransactionTemplate programmatically instead of this declarative approach.
This is a fairly good tutorial on programmatic control over transactions:
https://www.baeldung.com/spring-programmatic-transaction-management
In your example, you want to end the transaction before the API call, basically right after selecting your data. I'm also failing to see why you need a transaction in the first place.
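A minimal sketch of that programmatic approach, assuming the same transactionManagerDC bean; selectFromDb and executeApi are illustrative placeholders for the slow select and the API call:

import java.util.List;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class ReportService {

    private final TransactionTemplate transactionTemplate;

    public ReportService(@Qualifier("transactionManagerDC") PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void execute() {
        // the slow select runs in its own transaction, which commits as soon as the callback returns
        List<Object> rows = transactionTemplate.execute(status -> selectFromDb());

        // at this point the transaction is committed and the connection is back in the pool
        executeApi(rows);
    }

    private List<Object> selectFromDb() { /* illustrative: the 2-minute select */ return List.of(); }

    private void executeApi(List<Object> rows) { /* illustrative: the long API call */ }
}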
If you are opening and managing the Hibernate Session yourself rather than letting Spring handle it, also make sure the session gets closed once you are done:
finally {
    if (session != null && session.isOpen()) {
        session.close();
    }
}
We have a system that sells vouchers, and this selling process must be integrated with another system. The integration happens through AWS SQS queues.
1. System A processes the order and, at the end of the process, publishes a message to the SQS queue called new-orders-queue.
2. System B reads from the new-orders-queue, does some processing, and then publishes another event to another SQS queue called another-sqs-queue.
3. System A reads from the another-sqs-queue and then updates the order created in step 1.
The ordering process (step 1 above) is big, but nothing tremendously complex. It does some validations against its database (MySQL) and then writes inserts to a few tables.
All of this happens in a @Transactional context from Spring.
The problem is that step 3 sometimes happens before the order from step 1 is finally committed to the database, which leads to an error (the order it has to update is not found in the database, because it hasn't been committed yet). If we retry a second later, the process works normally. This doesn't happen every time, but we have to address it.
Have you seen this before?
Below is (really) reduced pseudo-code of step 1:
@Transactional
public Result handleNewOrder(OrderData data) {
    SqsClient sqsClient = new SqsClient();
    validatePrices(data);
    doSomeInserts(data);
    Result result = createResult(data);
    // the last line of the method, just before the return statement, posts the event to the queue
    sqsClient.sendEvent(Events.create(result));
    return result;
}
At the end of this method annotated with @Transactional, things should be committed, but somehow step 3 is being completed before the commit happens (at least it seems that way).
Maybe moving the event publishing out of the transactional boundary is the solution (and actually, I'm in favor of it), because this way we can guarantee that the event will only be processed after the transaction has been committed to the database. But then we would have to use some sort of retry mechanism in case our communication with SQS fails.
Is this the way to go, or do you have a better solution?
This sounds like it might be an operation that requires multiple transactions.
For example, you might have two methods, each annotated with @Transactional:
@Transactional
public void startHandleNewOrder(OrderData data) {
    // make changes to the database here and publish the event to new-orders-queue
}

@Transactional
public Result finishHandleNewOrder(OrderData data) {
    // await the response from another-sqs-queue and compile the result
}
This should work assuming that:
A separate service NOT annotated with @Transactional (i.e., one that sits outside the transactional barrier) calls these methods in order
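For instance, the orchestrating service could look something like this (a sketch; OrderProcessor and OrderHandler are illustrative names for the caller and for the bean holding the two @Transactional methods above):

@Service
public class OrderProcessor {

    private final OrderHandler orderHandler; // bean exposing the two @Transactional methods above

    public OrderProcessor(OrderHandler orderHandler) {
        this.orderHandler = orderHandler;
    }

    // no @Transactional here, so each call below commits its own transaction before the next one runs
    public Result handleNewOrder(OrderData data) {
        orderHandler.startHandleNewOrder(data); // commits before the next line executes
        return orderHandler.finishHandleNewOrder(data);
    }
}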
Alternatively, you could implement this without annotations like so:
@Autowired
PlatformTransactionManager transactionManager;

@PersistenceContext
EntityManager entityManager;

public Result handleNewOrder(OrderData data) {
    boolean rollback = true;
    TransactionStatus status = getTransaction();
    try {
        // make changes to the database here and publish the event to new-orders-queue
        transactionManager.commit(status);
        rollback = false;
    } finally {
        if (rollback)
            transactionManager.rollback(status);
    }

    // this may or may not be necessary if you want to ensure you're reading
    // fresh data from the database (otherwise data cached from step #1 may be used)
    entityManager.clear();

    rollback = true;
    status = getTransaction();
    try {
        // wait for the response from another-sqs-queue and compile the result
        Result result = null; // placeholder: build the result from the response here
        transactionManager.commit(status);
        rollback = false;
        return result;
    } finally {
        if (rollback)
            transactionManager.rollback(status);
    }
}

private TransactionStatus getTransaction() {
    DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    return transactionManager.getTransaction(def);
}
In the end, as suggested by M. Deinum, I implemented a @TransactionalEventListener with the default phase (TransactionPhase.AFTER_COMMIT).
Something like this:
@TransactionalEventListener(classes = {SellVoucherEvent.class})
public void dispatch(SellVoucherEvent event) {
    sqsClient.sendMessage("queue", turnEventToString(event));
}
This method is implemented in a @Component class, and from my transactional context I publish the event via an ApplicationEventPublisher (which is injected by Spring).
Example:
private final ApplicationEventPublisher publisher; // injected by Spring
@Transactional
public Result handleNewOrder(OrderData data) {
validatePrices(data);
doSomeInserts(data);
Result result = createResult(data);
SellVoucherEvent event = createEvent();
publisher.publishEvent(event); // publish the application event
return result;
}
Then, after the commit, the dispatch method annotated with @TransactionalEventListener is invoked and the event is sent to SQS. This way we can guarantee that the event will only be processed after the commit.
I am implementing a backend service with Spring Boot. This service receives a REST request, executes some database operations, and finally updates the status of the record.
After that, I would like to start a new async process that executes another data manipulation on the same record, like this:
@Service
public class ClassA {

    @Autowired
    private ClassB classB;

    @Autowired
    private MyEntityRepository repo;

    @Transactional
    public void doSomething(Long id) {
        // executing the business logic
        if (isOk()) {
            repo.updateStatus(id, Status.VERIFIED);
        }
        // I need to commit this DB transaction and return.
        // But after this transaction is committed, I need
        // to start an async process that must work on the
        // same record that was updated before.
        classB.complete(id);
    }
}
And this is my async method:
@Service
public class ClassB {

    @Autowired
    private MyEntityRepository repo;

    @Async
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void complete(Long id) {
        Optional<MyEntity> myEntity = repo.findById(id);
        if (myEntity.isPresent() && myEntity.get().getStatus() == Status.VERIFIED) {
            // execute 'business logic B'
        }
    }
}
classA.doSomething() is called multiple times with the same id, but business logic B must only be executed when the record status in the DB is VERIFIED.
The above solution works fine.
But my concern is the following: my test database is small, and the classA.doSomething() method always finishes and closes its transaction BEFORE classB.complete() starts to check the status of the same record in the DB. I can see in the log that the SQL statements are executed in the proper order:
* UPDATE TABLE SET STATUS = ... WHERE ID = 1 // doSomething()
* COMMIT
* SELECT * FROM TABLE WHERE ID = 1 // complete()
But is it 100% guaranteed that the first method, classA.doSomething(), will always finish and commit its transaction before the second, async classB.complete() call checks the status of the same record?
If the async method classB.complete() executes before classA.doSomething() finishes and commits to the DB, then I will break the business logic: business logic B will be skipped (the new DB transaction will not see the updated status yet), and that will cause a big issue. Maybe this can happen if the database is huge and the commit takes longer than it does in my small test DB.
Maybe I could work with the DB transaction isolation levels described here, but changing them could cause issues in other parts of the app.
What is the best way to implement this logic properly, so that the correct execution order with the async method is guaranteed?
It is NOT GUARANTEED that "the 1st classA.doSomething() method will always finish and commit the transaction before the 2nd classB.complete() async call checks the status of the same record".
Transactions are implemented as a kind of interceptor appropriate for the framework (this is true for CDI too). The method marked @Transactional is intercepted by the framework, so the transaction will not end before the closing } of the method. As a matter of fact, if the transaction was started by another method higher in the stack, it will end even later.
So ClassB has plenty of time to run and see an inconsistent state.
I would place the 1st part of doSomething in a separate REQUIRES_NEW transactional method (you may need to place it in a different class, depending on how you have configured transaction interceptors: with AspectJ weaving, Spring can intercept calls to methods of the same object; otherwise it relies on the injected proxy object to do the interception, and calling a method through this will not activate the interceptor; this is true for other frameworks as well, like CDI and EJB). The method doSomething calls the 1st-part method, which finishes in a new transaction, and then ClassB can continue asynchronously.
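A sketch of that split, assuming the update is moved to a separate Spring bean (ClassAUpdater is an illustrative name) so the transactional proxy is actually invoked:

@Service
public class ClassAUpdater {

    @Autowired
    private MyEntityRepository repo;

    // commits in its own transaction before control returns to the caller
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void verify(Long id) {
        // executing the business logic
        if (isOk()) {
            repo.updateStatus(id, Status.VERIFIED);
        }
    }
}

@Service
public class ClassA {

    @Autowired
    private ClassAUpdater updater;

    @Autowired
    private ClassB classB;

    public void doSomething(Long id) {
        updater.verify(id);  // the VERIFIED status is committed here
        classB.complete(id); // the async method now sees the committed status
    }
}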
Now, in that case (as correctly pointed out in the comment), there is a chance that the 1st transaction succeeds and the 2nd fails. If that happens, you will have to put logic in the system to compensate for this inconsistent state. Frameworks cannot deal with it because there is no single recipe; it is a per-case "treatment". Some thoughts, in case they help: make sure that the state of the system after the 1st transaction clearly says that the second transaction should complete "shortly after", e.g. keep a "1st tx committed at" field; a scheduled task can check this timestamp and take action if it is too far in the past. JMS gives you all this - you get retries and a dead letter queue for the failed cases.
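A rough sketch of such a scheduled check, in the spirit of the Spring code above; the "1st tx committed at" timestamp and the repository finder are hypothetical additions to the entity and repository:

@Component
public class StalledRecordChecker {

    @Autowired
    private MyEntityRepository repo;

    // compensation check: records whose 1st transaction committed "too long" ago
    // without business logic B having completed are flagged for retry or alerting
    @Scheduled(fixedDelay = 60_000) // requires @EnableScheduling on a configuration class
    public void checkStalledRecords() {
        Instant threshold = Instant.now().minus(Duration.ofMinutes(5));
        for (MyEntity entity : repo.findVerifiedButIncompleteBefore(threshold)) { // hypothetical finder
            // retry business logic B, alert, or mark as failed - per your business rules
        }
    }
}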
I am maintaining code which looks like this:
@Asynchronous
@TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
public void downloadFile(Long fileId) {
    // this method takes more than 1 hour
    service.download(fileId);

    // this method should be called even when the download finished with an error
    service.fileDownloadedFinishedNotification(fileId);
}
This is just example code; to fileDownloadedFinishedNotification we pass the message we want to display, etc., and inside it we mark the process as finished with error/success.
So, as you can see, the download can hit the timeout, and after that fileDownloadedFinishedNotification won't be called, because the transaction failed due to the timeout.
I was thinking about extracting the notification to another method and calling it like this:
@Asynchronous
@TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
public Future<String> downloadFile(Long fileId) {
    // this method takes more than 1 hour
    service.download(fileId);
    return new AsyncResult<String>("Test");
}

public void example() {
    long id = 15;
    String msg = "default stuff";
    try {
        msg = downloadFile(id).get();
    } catch (Exception e) {
        e.printStackTrace();
    }
    service.fileDownloadedFinishedNotification(id, msg);
}
But I am not sure if this is a good idea, or whether there is some other functionality I can use when the timeout is reached, something like onTimeout.
Some considerations:
There is no simple way to handle a transaction timeout with a listener, AFAIK.
Annotations use dynamic proxies under the cover, so they won't be applied on an inner call; you have to call your downloadFile from outside (on a bean injected into your caller).
The current transaction will already be aborted when fileDownloadedFinishedNotification is called, so all operations on a transacted resource (DB, etc.) will be rolled back; you may have to invoke that method within a dedicated transaction (e.g. annotate it with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)).
Assuming the download method retrieves the content across the network, and unless you access that resource through a dedicated JCA adapter, no exception will be thrown on transaction timeout: the transaction reaper only marks the current transaction as aborted and releases related resources, it does not interrupt the thread; only a subsequent access to a MANAGED resource (DataSource, JMS, etc.) will throw an exception.
Regarding the last point: while interacting with an unmanaged resource, the only way to know whether the current transaction is still active is to regularly check its state using EJBContext.getRollbackOnly() or by making a dummy access to a managed resource.
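A rough sketch of the dedicated-transaction idea from the third point; NotificationBean is an illustrative name and the body is left to your real persistence logic:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class NotificationBean {

    // runs in its own transaction, so it can still commit even after the
    // download's transaction has been marked rollback-only by the reaper
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void fileDownloadedFinishedNotification(Long fileId, String message) {
        // persist the final status / user-visible message for the download here
    }
}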
There are ways of achieving what you want, but a proper implementation would need more information about how much of the application you are able to change.
Transaction propagation is explained in many places, but given that you are running your app in an EJB container, I would start from here:
https://docs.oracle.com/javaee/6/tutorial/doc/bncih.html
I would read the whole chapter, but the most relevant part for your case is container-managed transactions, here:
https://docs.oracle.com/javaee/6/tutorial/doc/bncij.html
Now, assuming you have full access and you can change your database structure, the way I would implement this is:
You are running your service in a parent transaction T1.
Before invoking the download method, call another service to record that the download started and the maximum expected time to finish. Do this in a REQUIRES_NEW transaction; this quick database interaction will run in an autonomous transaction T2.
Once that T2 transaction commits, your "download started" record is committed and available to query.
Once back in the parent T1, start your download.
If the download finishes successfully, record the success in the same record you persisted in T2.
If you get a timeout, the success will never be recorded and the database will still show the download as started, along with the maximum expected time to finish.
Define a monitoring process that kicks off at regular intervals and checks the download status. If the expected time to finish has passed, have that monitoring process alert, record the failure, trigger another retry, or whatever your business rules are (a rough sketch of the tracking service follows).
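A rough sketch of the tracking service described in the steps above; DownloadTrackingBean and the DownloadStatus entity are hypothetical names:

import java.util.Date;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class DownloadTrackingBean {

    @PersistenceContext
    private EntityManager em;

    // T2: commits immediately, independently of the long-running parent transaction T1
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void recordDownloadStarted(Long fileId, Date expectedFinishBy) {
        em.persist(new DownloadStatus(fileId, new Date(), expectedFinishBy)); // hypothetical entity
    }

    // called after a successful download; the monitoring job flags rows whose
    // expectedFinishBy has passed without this update ever happening
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void recordDownloadFinished(Long fileId) {
        DownloadStatus status = em.find(DownloadStatus.class, fileId);
        status.setFinishedAt(new Date());
    }
}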
Hope it helped. I think you will have enough to start with.
Cheers
I'm creating a Hibernate Session and trying to start a new [Jta]Transaction. However, the transaction cannot be started, because the JtaTransaction that is used in the background seems to have been rolled back.
Here is what I'm doing:
Session session = sessionFactory.openSession();
CustomSessionWrapper dpSession = new CustomSessionWrapper(session, this);
if (!session.isClosed() && !session.getTransaction().isActive()) {
session.beginTransaction();
}
Nevertheless, the transaction is still not active after beginTransaction is called. When I debug the beginTransaction call I end up in the doBegin method of JtaTransaction (I do not override this method; I'm just posting its original code).
@Override
protected void doBegin() {
LOG.debug( "begin" );
userTransaction = locateUserTransaction();
try {
if ( userTransaction.getStatus() == Status.STATUS_NO_TRANSACTION ) {
userTransaction.begin();
isInitiator = true;
LOG.debug( "Began a new JTA transaction" );
}
}
catch ( Exception e ) {
throw new TransactionException( "JTA transaction begin failed", e );
}
}
The userTransaction.getStatus() returns Status.STATUS_ROLLEDBACK and no transaction is started. Does anyone know how I can fix that?
UPDATE 1 (you can skip this, since it was a mistake; see UPDATE 2)
I found out that there are two threads, one using the main session and another using smaller sessions for logging. The main session (and transaction) is open for a longer period of time, basically until the operation is finished. It seems that locateUserTransaction always returns the same userTransaction. This means that the main session opens this userTransaction and one of the side sessions commits/rolls back that transaction. Does anyone know what to do so that different transactions are retrieved?
UPDATE 2
I found out that I don't have two threads; it is a single thread that opens two sessions in parallel. Each session should then open its own transaction, yet both get the same UserTransaction. How can I tell Hibernate that each session should get its own [User]Transaction?
Hibernate abstracts both local and JTA transactions behind its own abstraction layer, so I don't see why you'd have to write such low-level transaction handling code.
In Java EE you have the app server to manage transactions; in stand-alone apps Bitronix + Spring do the job too.
Although you can write your own transaction management logic, I always advise people to reuse what they already have available. XA/JTA and Hibernate require extensive knowledge to make work together seamlessly.
Update 1
Two different threads shouldn't use the same user transaction; you should use a different transaction for each thread.
I have an EJB3 application which consists of some EJBs for accessing a DB, exposed via a session bean as a web service.
Now there are two things I need to find out:
1) Is there any way I can stop SQL exceptions from causing the web service to throw a SOAP fault? The transactions are handled by the container, and currently SQL exceptions cause a RollbackException to be thrown, and consequently the transaction is rolled back (desired behaviour) and the web service throws a fault (not desired).
2) I wish to extend the web service to take in a list of entities and have the session bean persist each one. However, I want each entity to be handled in its own transaction, so that if one fails the others are not affected (and again, the web service should not fault).
For (1) I have tried to catch the RollbackException, but I assume it is thrown somewhere on another thread, as the catch block is never reached. For (2) I assume I will need to look into user transactions, but firstly I would prefer the container to manage this, and secondly I do not know how to force the use of user transactions.
Thanks.
No, you can do all of this with container-managed transactions (and this is definitely preferable, as managing transactions yourself is a pain).
The gist of the solution is to create a second EJB with a local interface only and the transaction semantics you desire. Your "public" EJB, which the web service calls directly, then calls into this second EJB via its local interface to do the actual work.
Something along the lines of:
public class MyPublicEjb {

    @EJB
    private MyPrivateImpl impl;

    public void doSomething() {
        try {
            impl.doSomething();
        } catch (EJBException e) {
            // the inner transaction rolled back - handle it here instead of faulting
        }
    }
}
I know this looks sort of ugly, but trust me, this is far preferable to directly manipulating transactions.
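The second EJB might then look something like this; a sketch using REQUIRES_NEW so each piece of work commits or rolls back on its own without faulting the web service call:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class MyPrivateImpl {

    @PersistenceContext
    private EntityManager em;

    // the work commits or rolls back inside this new transaction,
    // so a rollback never propagates up to the web service transaction
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void doSomething() {
        // persist / update entities here
    }
}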
For (1): debug your code to find out where the exception is being thrown and what is causing it, then handle the exception there.
For (2): wrap each entity in its own beginTransaction()/commit() pair:
for (Entity entity : entities) {
    try {
        // begin transaction
        // save entity
        // commit
    } catch (Exception e) {
        // handle the exception, but continue with the remaining entities
    }
}
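If you do go the user-transaction route, a bean-managed sketch of that loop could look like this (MyEntity, EntityBatchSaver and saveAll are illustrative names):

import java.util.List;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class EntityBatchSaver {

    @Resource
    private UserTransaction utx;

    @PersistenceContext
    private EntityManager em;

    // persists each entity in its own transaction; one failure does not affect the others
    public void saveAll(List<MyEntity> entities) {
        for (MyEntity entity : entities) {
            try {
                utx.begin();
                em.persist(entity);
                utx.commit();
            } catch (Exception e) {
                try {
                    utx.rollback();
                } catch (Exception rollbackFailure) {
                    // nothing more to do for this entity
                }
                // log the failure and continue with the remaining entities
            }
        }
    }
}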