I'm conducting an integration test for a spring-integration process consisting of 2 integration-flows.
The 1st flow grabs a message from the user/test and pushes it into a jdbc-backed (H2DB) message-queue.
The 2nd flow is driven by a transactional poller which grabs the enqueued message from that same queue and writes it to a folder.
The test periodically checks the folder content and succeeds if it manages to find a file inside the folder before a preset timeout.
Sometimes the test finds the file and starts to shut down the Spring context before the poller transaction has had time to commit, in which case my test dies a horrible death and curses me with a final "cannot commit JDBC transaction, database not available any longer" exception.
1) How can I prevent my test from exiting before the poller transaction has committed?
Note: because the poller transaction doesn't live inside the test thread, I (probably) cannot make use of the usual goodies Spring provides for transactional testing.
My current idea is to check not only for the existence of the file but also to assert if the message queue is empty. I believe that the poller transaction should prevent the test thread from seeing an empty queue until the transaction is committed (no dirty-read).
2) Does the default transaction isolation level of Spring (Integration) with H2 guarantee that dirty reads are avoided?
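The extra check I have in mind could be sketched like this (assuming the queue channel is backed by a JdbcChannelMessageStore and the test knows the group id; the names messageStore and "testQueueGroup" are illustrative):

```java
// Wait until the JDBC message store reports the queue group as empty,
// i.e. the poller transaction has committed (no dirty read at READ_COMMITTED).
long deadline = System.currentTimeMillis() + 10_000;
while (messageStore.messageGroupSize("testQueueGroup") > 0) {
    if (System.currentTimeMillis() > deadline) {
        fail("Poller transaction did not commit before the timeout");
    }
    Thread.sleep(50);
}
```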
It is better to stop that poller before you exit the test.
Or stop the AbstractEndpoint for that queue channel before the end of the test method. Or consider using @DirtiesContext if you rely on the Spring Testing Framework for application context management.
The problem is that the poller really works in its own scheduled thread, and when your test is ready to assert against some polling results, the process is still working in the background on other polling cycles.
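For example (a sketch; the endpoint bean name jdbcQueuePoller is an assumption for illustration), the polling endpoint can be stopped before the test method returns:

```java
@Autowired
@Qualifier("jdbcQueuePoller") // hypothetical bean name of the polling endpoint
private AbstractEndpoint poller;

@Test
public void fileIsWrittenBeforeShutdown() throws Exception {
    // ... send the message and wait for the file to appear ...
    poller.stop(); // no new polling cycles start; the in-flight transaction can finish committing
}
```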
The Spring Integration support is fully based on the Spring TX foundation, so the default isolationLevel is exactly the one you see on @Transactional:
/**
 * The transaction isolation level.
 * <p>Defaults to {@link Isolation#DEFAULT}.
 * <p>Exclusively designed for use with {@link Propagation#REQUIRED} or
 * {@link Propagation#REQUIRES_NEW} since it only applies to newly started
 * transactions. Consider switching the "validateExistingTransactions" flag to
 * "true" on your transaction manager if you'd like isolation level declarations
 * to get rejected when participating in an existing transaction with a different
 * isolation level.
 * @see org.springframework.transaction.interceptor.TransactionAttribute#getIsolationLevel()
 * @see org.springframework.transaction.support.AbstractPlatformTransactionManager#setValidateExistingTransaction
 */
Isolation isolation() default Isolation.DEFAULT;
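For reference, H2's default isolation level is READ_COMMITTED, which already forbids dirty reads. If you want to be explicit, the isolation can be pinned on the method the test uses to inspect the queue (a sketch; the query against the default INT_CHANNEL_MESSAGE table is illustrative):

```java
@Transactional(isolation = Isolation.READ_COMMITTED) // rules out dirty reads explicitly
public boolean queueIsEmpty() {
    // INT_CHANNEL_MESSAGE is the default table used by JdbcChannelMessageStore
    Integer count = jdbcTemplate.queryForObject(
            "SELECT COUNT(*) FROM INT_CHANNEL_MESSAGE", Integer.class);
    return count != null && count == 0;
}
```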
Related
I'm trying to understand how transactions work in Spring AMQP. Reading the docs: https://docs.spring.io/spring-amqp/reference/html/#transactions , I understand the purpose of enabling transactions in the publisher (Best Effort One Phase Commit pattern), but I have no idea why it could be necessary in a MessageListener.
Let's take an example:
acknowledgeMode=AUTO
1. Consume a message using @RabbitListener
2. Insert data into the database
3. Publish a message using RabbitTemplate
According to the docs: https://docs.spring.io/spring-amqp/reference/html/#acknowledgeMode, if acknowledgeMode is set to AUTO then, if any subsequent operation fails, the listener will fail too and the message will be returned to the queue.
Another question: what's the difference between a local transaction and an external transaction in that case (setting container.setTransactionManager(transactionManager()); or not)?
I would appreciate some clarification :)
Enable transactions in the listener so that any/all downstream RabbitTemplate operations participate in the same transaction.
If there is a failure, the container will rollback the transaction (removing the publishes), nack the message (or messages if batch size is greater than one) and then commit the nacks so the message(s) will be redelivered.
When using an external transaction manager (such as JDBC), the container will synchronize the AMQP transaction with the external transaction.
Downstream templates participate in the transaction regardless of whether it is local (AMQP only) or synchronized.
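A sketch of wiring an external JDBC transaction manager into the listener container, under the assumption that a DataSource-backed transaction manager is available (bean names are illustrative):

```java
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, DataSourceTransactionManager jdbcTxManager) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setChannelTransacted(true);           // local AMQP transaction on the channel
    factory.setTransactionManager(jdbcTxManager); // synchronize AMQP tx with the JDBC tx
    return factory;
}
```

With this in place, a failure in the listener rolls back both the JDBC work and any transacted publishes, and the delivery is nacked for redelivery.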
I have a Spring Boot service that needs to listen to messages across multiple RMQ vhosts. So far I only need to consume messages, though in the short future I might need to publish messages to a third vhost. For this reason I've moved towards explicit configuration of the RMQ connection factory - one connection factory per vhost.
Looking at the documentation the PooledChannelConnectionFactory fits my needs. I do not need strict ordering of the messages, correlated publisher confirms, or caching connections to a single vhost. Everything I do with rabbit is take a message and update an entry in the database.
@Bean
PooledChannelConnectionFactory pcf() throws Exception {
    ConnectionFactory rabbitConnectionFactory = new ConnectionFactory();
    // Set the credentials
    PooledChannelConnectionFactory pcf = new PooledChannelConnectionFactory(rabbitConnectionFactory);
    pcf.setPoolConfigurer((pool, tx) -> {
        if (tx) {
            // configure the transactional pool
        }
        else {
            // configure the non-transactional pool
        }
    });
    return pcf;
}
What I require assistance with is understanding the difference between the transactional and non-transactional pool. My understanding of RMQ and AMQP is that everything is async unless you build RPC semantics on top of it (reply queues and exchanges). Given that, how can this channel pool have transactional properties?
My current approach is to disable one of the configurations by setting min/max to 0, and set the other to a min/max of 1. I do not expect to have extreme volume through the service, and I expect to be horizontally scaling the application which will scale the capacity to consume messages. Is there anything else I should be considering?
The pools are independent; you won't get any transactional channels as long as you don't use a RabbitTemplate with channelTransacted set to true, so there is no need to worry about configuring the pool like that.
Transactions can be used, for example, to atomically send a series of messages (all sent or none sent if the transaction is rolled back). Useful if you are synchronizing with some other transaction, such as JDBC.
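A sketch of the synchronization case (the repository, template bean, and queue name are illustrative assumptions):

```java
@Bean
public RabbitTemplate transactedTemplate(ConnectionFactory cf) {
    RabbitTemplate template = new RabbitTemplate(cf);
    template.setChannelTransacted(true); // sends commit or roll back as a unit
    return template;
}

@Transactional // JDBC transaction; the AMQP tx is synchronized to commit after it
public void process(Order order) {
    repository.save(order);
    transactedTemplate.convertAndSend("orders", order); // not published if save() fails
}
```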
I have a Spring Boot / Spring Integration application that uses @Poller in Spring Integration and also @Scheduled on a method in a mostly-unrelated class. The @Poller polls an FTP server for new files. However, I've found that the @Poller seems to somehow interfere with my @Scheduled method.
The @Poller has maxMessagesPerPoll = -1 so that it will process as many files as it can get. However, when I first start my application there are over 100 files on the FTP server, so it's going to process them all. What I have found is that, while these files are being processed, the @Scheduled method stops triggering at all.
For example, if I set my @Scheduled to fixedDelay = 1 so it triggers every millisecond and then start my application, the @Scheduled method will trigger a few times, until the @Poller triggers and begins processing messages, at which point my @Scheduled method completely stops triggering. I assumed there was simply some task queue being filled by the @Poller and I just needed to wait for all the messages to be processed, but even after the @Poller is completely done and has processed all of the files, the @Scheduled method still does not trigger at all.
My thought is that maybe some task queue being filled by the @Poller is breaking my @Scheduled method, but if so, I still don't see any way to use a separate task queue for the different methods, or any other options for customizing or fixing this issue.
Does anyone have any idea what might be happening to my @Scheduled method, and how I can fix it?
@Poller:
@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(cron = "0/5 * * ? * *", maxMessagesPerPoll = "-1"))
public MessageSource<InputStream> myMessageSource() {
    // Build my message source
    return messageSource;
}
@Scheduled:
@Scheduled(fixedDelay = 6000)
public void myScheduledMethod() {
    // Do Stuff
}
They both use the same bean name for their scheduler: taskScheduler.
It should only be a problem if you have 10 or more pollers (the default scheduler bean configured by Spring Integration has a pool size of 10 by default). A common mistake is having many queue channels (which hold on to scheduler threads for a second at a time, by default).
If you only have one poller, and not a lot of queue channels, I can't explain why you would get thread starvation.
You can increase the pool size - see Configuring the Task Scheduler.
Or you can use a different scheduler in the ScheduledAnnotationBeanPostProcessor.
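For example, a larger scheduler pool can be provided as your own bean (the pool size of 10 here is just an illustration; pick a value that covers your pollers plus @Scheduled methods):

```java
@Bean(name = "taskScheduler")
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(10); // enough threads for pollers and @Scheduled methods
    scheduler.setThreadNamePrefix("task-scheduler-");
    return scheduler;
}
```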
As already pointed out, the problem is linked to the task schedulers having the same name, although it may occur even with fewer than 10 pollers. Spring Boot auto-configuration provides a scheduler with a default pool size of 1, and the registration of this scheduler may happen before the registration of the taskScheduler provided by Spring Integration.
Configuring the task scheduler via Spring Integration properties doesn't help, as that bean doesn't get registered at all. But providing your own TaskScheduler instance with an adjusted pool size, changing the pool size of the auto-configured scheduler via the spring.task.scheduling.pool.size property, or excluding TaskSchedulingAutoConfiguration should solve the issue.
In our case, the Poller was used by an inbound-channel-adapter to fetch mail from an IMAP server - but when it polled for an email with large attachments, it blocked the thread used by @Scheduled, as only a single thread is used for scheduling tasks.
So we set the Spring property spring.task.scheduling.pool.size=2 - which now allows the @Scheduled method to run in a different thread even if the poller gets blocked (in another thread) while fetching mail from the IMAP server.
The below is my requirement:
1. An MDB receives a message.
2. The MDB triggers an asynchronous method in another session bean - asynchronous because this method will be a long-running thread and we don't want to hold the MDB thread for a long time. The asynchronous method reads records from the DB, processes them, and calls #3.
3. Writes to another MQ and then inserts some data into the DB. Posting to MQ and the DB insert should be in one transaction.
Here is the implementation :
For #1 - Using an MDB - container-managed transaction without any transaction attribute.
For #2 - A stateless session bean - container-managed, asynchronous, but with transaction attribute NOT_SUPPORTED (not supported because this is a long-running thread, so we don't want the transaction to time out).
For #3 - A stateless session bean (invoked from #2 for every record read in #2 - executed in a loop) - transaction attribute REQUIRES_NEW, because this method posts to MQ and inserts into the DB.
Issues:
Runtime exception - when I throw a runtime exception from #3, the next records are not processed - the session bean just exits.
Checked exception - when throwing a custom exception, the message on the queue is not reverted when the DB insert fails.
What is the best way to implement this, or to resolve these issues?
I did my best to give details - appreciate any help on this.
It's more of a conceptual question: I currently have a working ActiveMQ queue which is consumed by a Java Spring application. Now I want the queue not to permanently delete messages until the Java app tells it the message has been correctly saved in the DB. After reading the documentation, I gather I have to do it transactionally and use the commit() / rollback() methods. Correct me if I'm wrong here.
My problem comes with every example I find over the internet telling me to configure the app to work this or that way, but my nose tells me I should instead be setting up the queue itself to work the way I want. And I can't find the way to do it.
Otherwise, is the queue just working in different ways depending on how the consumer application is configured to work? What am I getting wrong?
Thanks in advance
The queue itself is not aware of any transactional system, but you can pass the first boolean parameter as true to create a transactional session. However, I propose INDIVIDUAL_ACKNOWLEDGE when creating a session, because you can then manage messages one by one. It can be set on the Spring JMS DefaultMessageListenerContainer.
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE
Then call this method to ack a message; until the method is called, the message is considered dispatched but not acknowledged.
ActiveMQTextMessage.acknowledge();
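A sketch of configuring the listener container for individual acknowledgement (the connection factory and queue name are illustrative):

```java
@Bean
public DefaultMessageListenerContainer listenerContainer(
        javax.jms.ConnectionFactory cf, MessageListener listener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(cf);
    container.setDestinationName("my.queue"); // illustrative queue name
    // ActiveMQ-specific mode: each message is acked individually, not cumulatively
    container.setSessionAcknowledgeMode(ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
    container.setMessageListener(listener);
    return container;
}
```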
UPDATE:
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE can be used like this:
public void onMessage(Message message) {
    try {
        // do some stuff in the database
        jdbc.commit(); // unless auto-commit is enabled on the JDBC connection
        message.acknowledge(); // ack only after the DB work succeeded
    }
    catch (Exception e) {
        // no ack - the message stays unacknowledged and will be redelivered
    }
}
There are 2 kinds of transaction support in ActiveMQ.
JMS transactions - the commit() / rollback() methods on a Session (which is like doing commit() / rollback() on a JDBC connection)
XA transactions - where the XASession acts as an XAResource by communicating with the message broker, rather like a JDBC Connection participates in an XA transaction by communicating with the database.
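A minimal sketch of the first kind, a JMS local transaction using the plain JMS API (the connection factory and queue name are illustrative):

```java
Connection connection = connectionFactory.createConnection();
connection.start();
// First argument true => transacted session; the ack mode argument is then ignored
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(session.createQueue("my.queue"));
try {
    Message message = consumer.receive(1000);
    // ... save to the database ...
    session.commit();   // the message is removed from the queue only now
}
catch (Exception e) {
    session.rollback(); // the message will be redelivered
}
```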
http://activemq.apache.org/how-do-transactions-work.html
Should I use XA transactions (two phase commit?)
A common use of JMS is to consume messages from a queue or topic, process them using a database or EJB, then acknowledge / commit the message.
If you are using more than one resource; e.g. reading a JMS message and writing to a database, you really should use XA - its purpose is to provide atomic transactions for multiple transactional resources. For example there is a small window from when you complete updating the database and your changes are committed up to the point at which you commit/acknowledge the message; if there is a network/hardware/process failure inside that window, the message will be redelivered and you may end up processing duplicates.
http://activemq.apache.org/should-i-use-xa.html