Below is my requirement:
1. An MDB receives a message.
2. It triggers an asynchronous method in another session bean - asynchronous because this method is long-running and we don't want to hold the MDB thread for a long time. The asynchronous method reads records from the DB, processes them, and calls #3.
3. That method writes to another MQ and then inserts some data into the DB. The posting to MQ and the DB insert should be in one transaction.
Here is the implementation:
For #1 - an MDB with container-managed transactions and no explicit transaction attribute.
For #2 - a stateless session bean - container-managed, asynchronous, with the transaction attribute NOT_SUPPORTED (not supported because this is a long-running method, so I don't want the transaction to be timed out).
For #3 - a stateless session bean (invoked from #2 for every record that is read in #2 - executed in a loop) - transaction attribute REQUIRES_NEW, because this method posts to MQ and inserts into the DB. A sketch of this layout is shown below.
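For reference, a minimal sketch of that layout (each bean would live in its own source file); the bean names, the queue lookup and the Record/readRecordsFromDb() helper are made up for illustration:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = @ActivationConfigProperty(
        propertyName = "destinationLookup", propertyValue = "jms/inboundQueue")) // hypothetical queue
public class InboundMdb implements MessageListener {

    @EJB
    private RecordProcessor processor;

    @Override
    public void onMessage(Message message) {
        processor.processAll();   // #2: returns immediately, the work continues asynchronously
    }
}

@Stateless
public class RecordProcessor {

    @EJB
    private RecordWriter writer;

    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)   // long-running, no transaction here
    public void processAll() {
        for (Record record : readRecordsFromDb()) {                 // Record/readRecordsFromDb() are placeholders
            writer.postAndInsert(record);                           // #3: each record in its own transaction
        }
    }
}

@Stateless
public class RecordWriter {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void postAndInsert(Record record) {
        // post to the outbound MQ and insert into the DB; both enlist in this new (XA) transaction
    }
}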
Issues:
Runtime exception - when I throw a runtime exception from #3, the remaining records are not processed - the session bean in #2 just exits.
Custom exception - when I throw a custom (application) exception, the message posted to the queue is not rolled back when the DB insert fails.
What is the best way to implement this, or to resolve these issues?
I did my best to give details - appreciate any help on this.
I'm trying to understand how transactions work in Spring AMQP. Reading the docs: https://docs.spring.io/spring-amqp/reference/html/#transactions , I understand the purpose of enabling transactions in the publisher (Best Effort One Phase Commit pattern), but I have no idea why it could be necessary in a MessageListener.
Let's take an example:
acknowledgeMode=AUTO
Consume a message using @RabbitListener
Insert data into database
Publish message using rabbitTemplate
According to the docs (https://docs.spring.io/spring-amqp/reference/html/#acknowledgeMode), if acknowledgeMode is set to AUTO and any subsequent operation fails, the listener will fail too and the message will be returned to the queue.
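As a concrete sketch of those steps, assuming a Spring Data repository and a RabbitTemplate are available (the queue names, Order and OrderRepository are made up for illustration):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    private final OrderRepository repository;      // hypothetical Spring Data JPA repository
    private final RabbitTemplate rabbitTemplate;

    public OrderListener(OrderRepository repository, RabbitTemplate rabbitTemplate) {
        this.repository = repository;
        this.rabbitTemplate = rabbitTemplate;
    }

    @RabbitListener(queues = "inbound.queue")                        // hypothetical queue name
    public void handle(Order order) {
        repository.save(order);                                      // insert data into the database
        rabbitTemplate.convertAndSend("outbound.queue", order);      // publish the downstream message
        // with acknowledgeMode=AUTO, any exception thrown from this method
        // makes the container reject/requeue the incoming message
    }
}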
Another question: what is the difference between a local transaction and an external transaction in that case (i.e. setting container.setTransactionManager(transactionManager()); or not)?
I would appreciate some clarification :)
Enable transactions in the listener so that any/all downstream RabbitTemplate operations participate in the same transaction.
If there is a failure, the container will rollback the transaction (removing the publishes), nack the message (or messages if batch size is greater than one) and then commit the nacks so the message(s) will be redelivered.
When using an external transaction manager (such as JDBC), the container will synchronize the AMQP transaction with the external transaction.
Downstream templates participate in the transaction regardless of whether it is local (AMQP only) or synchronized.
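A rough sketch of the container factory wiring for the two cases, assuming a Rabbit ConnectionFactory and a JDBC DataSourceTransactionManager are already defined as beans (the names are illustrative):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, DataSourceTransactionManager jdbcTxManager) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setChannelTransacted(true);            // local (AMQP-only) channel transaction
    factory.setTransactionManager(jdbcTxManager);  // "external": synchronize the AMQP TX with the JDBC TX
    // leave out setTransactionManager(...) to stay with a purely local AMQP transaction
    return factory;
}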
I'm conducting an integration test for a spring-integration process consisting of 2 integration-flows.
The 1st flow grabs a message from the user/test and pushes it into a jdbc-backed (H2DB) message-queue.
The 2nd flow is driven by a transactional poller which grabs the enqueued message from that same queue and writes it to a folder.
The test periodically checks the folder content and succeeds if it manages to find a file inside the folder before a preset timeout.
Sometimes, the test will find the file and start to shut down the Spring context before the poller transaction has had time to commit, in which case my test dies a horrible death and curses me with a final "cannot commit JDBC transaction, database not available any longer" exception.
1) How can I prevent my test from exiting before the poller transaction is committed?
Note: because the poller transaction doesn't live inside the test thread, I probably cannot make use of the usual goodies Spring provides for transactional testing.
My current idea is to check not only for the existence of the file but also to assert if the message queue is empty. I believe that the poller transaction should prevent the test thread from seeing an empty queue until the transaction is committed (no dirty-read).
2) Does the default transaction isolation level used by Spring (Integration) with H2 guarantee that I avoid dirty reads?
It is better to have that poller stopped before you exit the test.
Or stop an AbstractEndpoint for that queue channel before the end of that test method. Or consider using @DirtiesContext if you rely on the Spring Testing Framework for application context management.
The problem is that the poller really works in its own scheduled thread, and when your test is ready to assert against some polling results, the process is still working in the background on further polling cycles.
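For example, something along these lines inside the test class, assuming the polling endpoint is exposed as a bean (the bean name 'filePoller' is made up):

@Autowired
@Qualifier("filePoller")                 // hypothetical bean name of the polling AbstractEndpoint
private AbstractEndpoint filePoller;

@Test
public void writesFileFromQueue() throws Exception {
    // ... send the test message into the first flow and wait for the file to appear ...
    filePoller.stop();                   // stop scheduling new polls before the context shuts down
    // ... assert on the folder content (and optionally that the message queue is empty) ...
}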
The Spring Integration support is fully based on the Spring TX foundation, so the default isolationLevel is exactly the one you see on @Transactional:
/**
 * The transaction isolation level.
 * <p>Defaults to {@link Isolation#DEFAULT}.
 * <p>Exclusively designed for use with {@link Propagation#REQUIRED} or
 * {@link Propagation#REQUIRES_NEW} since it only applies to newly started
 * transactions. Consider switching the "validateExistingTransactions" flag to
 * "true" on your transaction manager if you'd like isolation level declarations
 * to get rejected when participating in an existing transaction with a different
 * isolation level.
 * @see org.springframework.transaction.interceptor.TransactionAttribute#getIsolationLevel()
 * @see org.springframework.transaction.support.AbstractPlatformTransactionManager#setValidateExistingTransaction
 */
Isolation isolation() default Isolation.DEFAULT;
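Isolation.DEFAULT means "whatever the database uses by default"; H2's default isolation level is read committed, which already rules out dirty reads. If you want to be explicit about it, the isolation comes from the TransactionDefinition backing the transactional advice. A generic core-Spring sketch (not the exact poller wiring), with transactionManager assumed to be your JDBC transaction manager:

// Illustrative only: runs the given work in a transaction that explicitly requests READ_COMMITTED.
void runUnderReadCommitted(PlatformTransactionManager transactionManager, Runnable work) {
    DefaultTransactionDefinition definition = new DefaultTransactionDefinition();
    definition.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED); // no dirty reads
    definition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

    TransactionTemplate template = new TransactionTemplate(transactionManager, definition);
    template.execute(status -> {
        work.run();   // the polled message handling runs under READ_COMMITTED
        return null;
    });
}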
I'm creating a resource subscription using transaction 1. Before transaction 1 returns, it adds the request, the response and the just-created subscription resource (fetched via a JPA query) into a queue which is taken care of by executor-service threads.
This executor service starts a separate transaction 2 and uses a JPA query to read a specific attribute of the subscription resource, but it gets a null value, whereas the previous transaction 1 could see it (presumably because it created it itself). Transaction 2 needs to read the current value but does not find it.
I'm using EclipseLink 2.6, JDK 1.8 and WildFly 10.Final.
I have looked into whether EclipseLink has persisted it to the DB or only kept it in the persistence context, because transaction 1 is still not complete while the new transaction is trying to read it.
That's normal transaction behavior.
As long as T1 is not committed, T2 cannot see the data from T1.
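One common way around it (a sketch, not your exact code) is to hand the work to the executor only after transaction 1 has committed, for example via an interposed synchronization; the bean, the ManagedExecutorService injection and the Runnable task are illustrative:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.TransactionSynchronizationRegistry;

@Stateless
public class SubscriptionDispatcher {

    @Resource
    private TransactionSynchronizationRegistry txRegistry;

    @Resource
    private ManagedExecutorService executorService;

    // Register the task now; submit it only once the surrounding JTA transaction (T1) has committed.
    public void enqueueAfterCommit(Runnable task) {
        txRegistry.registerInterposedSynchronization(new Synchronization() {
            @Override
            public void beforeCompletion() { /* nothing to do before commit */ }

            @Override
            public void afterCompletion(int status) {
                if (status == Status.STATUS_COMMITTED) {
                    executorService.submit(task);   // transaction 2 will now see the committed row
                }
            }
        });
    }
}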
It's more of a conceptual question: I currently have a working ActiveMQ queue which is consumed by a Java Spring application. Now I want the queue not to permanently delete the messages until the Java app tells it the message has been correctly saved in the DB. After reading the documentation I gather I have to do it transactionally and use the commit()/rollback() methods. Correct me if I'm wrong here.
My problem is that every example I find on the internet tells me to configure the app to work this or that way, whereas my nose tells me I should instead be setting up the queue itself to work the way I want, and I can't find a way to do that.
Otherwise, is the queue just working in different ways depending on how the consumer application is configured to work? What am I getting wrong?
Thanks in advance
The queue itself is not aware of any transactional system, but you can pass the first boolean parameter as true when creating a session to get a transacted session. However, I propose INDIVIDUAL_ACKNOWLEDGE when creating the session, because it lets you manage messages one by one. It can be set on the Spring JMS DefaultMessageListenerContainer.
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE
Then call this method to ack a message; as long as the method is not called, the message is considered dispatched but not acknowledged.
ActiveMQTextMessage.acknowledge();
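A sketch of that container wiring, assuming an ActiveMQ ConnectionFactory and your own MessageListener bean (the bean and queue names are made up):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQSession;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Bean
public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
                                                         MessageListener dbSavingListener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("my.queue");                                    // illustrative queue name
    container.setMessageListener(dbSavingListener);
    container.setSessionAcknowledgeMode(ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE); // ack each message individually
    return container;
}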
UPDATE:
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE can be used like this:

public void onMessage(ActiveMQTextMessage message) {
    try {
        // do some stuff in the database
        jdbc.commit();          // unless auto-commit is enabled on the JDBC connection
        message.acknowledge();  // ack only after the DB work has been committed
    } catch (Exception e) {
        // no acknowledge: the broker keeps the message and will redeliver it
    }
}
There are 2 kinds of transaction support in ActiveMQ.
JMS transactions - the commit() / rollback() methods on a Session (which is like doing commit() / rollback() on a JDBC connection)
XA Transactions - where the XASession acts as an XAResource by communicating with the Message Broker, rather like a JDBC Connection takes place in an XA transaction by communicating with the database.
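For the first kind, a minimal local-transaction sketch (connection setup omitted, destination name made up):

Session session = connection.createSession(true, Session.SESSION_TRANSACTED); // transacted session
MessageConsumer consumer = session.createConsumer(session.createQueue("orders.queue"));
Message message = consumer.receive(5000);
try {
    // process the message, e.g. write it to the database
    session.commit();      // acknowledges the message; it is removed from the queue
} catch (Exception e) {
    session.rollback();    // the message stays on the queue and is redelivered
}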
http://activemq.apache.org/how-do-transactions-work.html
Should I use XA transactions (two phase commit?)
A common use of JMS is to consume messages from a queue or topic, process them using a database or EJB, then acknowledge / commit the message.
If you are using more than one resource; e.g. reading a JMS message and writing to a database, you really should use XA - its purpose is to provide atomic transactions for multiple transactional resources. For example there is a small window from when you complete updating the database and your changes are committed up to the point at which you commit/acknowledge the message; if there is a network/hardware/process failure inside that window, the message will be redelivered and you may end up processing duplicates.
http://activemq.apache.org/should-i-use-xa.html
I am using JTA UserTransaction to perform some database and JMS related activity.
The problem goes as below:
1. Start UserTransaction
2. Perform DB search operation
3. Perform DB update operation
4. Perform JMS send and receive operation ----> problematic step in the workflow
5. Perform DB update operation
6. Commit the transaction
The 4th step is creating the problem: the message that is sent will not be visible on the queue until the transaction is committed, and because of this the JMS receive part is broken.
Step 4 can't be performed before starting the JTA transaction, as there is a lot of dependency on the other steps.
Is there any way I can handle this type of situation? Is there any way to bypass the transaction for step 4? Any help is appreciated.
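To illustrate the problem, a stripped-down sketch of that flow (the DAO, destinations and variables are made up): the receive in step 4 cannot see the message sent in the same step, because a transacted send is only delivered to the queue on commit.

userTransaction.begin();                                  // 1. start the JTA transaction
Customer customer = customerDao.find(id);                 // 2. DB search (DAO is hypothetical)
customerDao.update(customer);                             // 3. DB update

producer.send(requestQueue, requestMessage);              // 4a. the send is only buffered in the transaction
Message reply = replyConsumer.receive(5000);              // 4b. times out: the request is not visible on the
                                                          //     queue until the transaction commits
customerDao.update(customer);                             // 5. DB update
userTransaction.commit();                                 // 6. only now is the request actually delivered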
Thanks