It's more of a conceptual question: I currently have a working ActiveMQ queue which is consumed by a Java Spring application. Now I want the queue not to permanently delete the messages until the Java app tells it the message has been correctly saved in the DB. After reading the documentation, I understand I have to make the consumption transactional and use the commit() / rollback() methods. Correct me if I'm wrong here.
My problem is that every example I find on the internet tells me to configure the application to work this way or that way, but my gut tells me I should instead be setting up the queue itself to behave the way I want, and I can't find out how to do that.
Or does the queue simply behave differently depending on how the consuming application is configured? What am I getting wrong?
Thanks in advance
The queue itself is not aware of any transactional system, but you can pass the first boolean parameter as true when creating a session to make it transacted. However, I would suggest INDIVIDUAL_ACKNOWLEDGE when creating a session, because it lets you manage messages one by one. It can be set on the Spring JMS DefaultMessageListenerContainer.
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE
And call this method to acknowledge a message; until it is called, the message is considered dispatched but not acknowledged.
ActiveMQTextMessage.acknowledge();
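For context, a minimal sketch of wiring that up on a DefaultMessageListenerContainer (the broker URL, queue name and method name below are placeholders, not taken from the question):
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ContainerConfig {

    public DefaultMessageListenerContainer individualAckContainer(MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        container.setDestinationName("my.queue");
        // ActiveMQ-specific mode (value 4): each message is acked individually, not as a batch
        container.setSessionAcknowledgeMode(ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        container.setMessageListener(listener);
        return container;
    }
}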
UPDATE:
ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE can be used like this:
public void onMessage(ActiveMQTextMessage message) {
    try {
        // do some stuff in the database
        jdbc.commit(); // not needed if auto-commit is enabled on the JDBC connection
        message.acknowledge(); // ack only once the DB work has succeeded
    } catch (Exception e) {
        // no ack: the message remains dispatched but unacknowledged and can be redelivered
    }
}
There are 2 kinds of transaction support in ActiveMQ.
JMS transactions - the commit() / rollback() methods on a Session (which is like doing commit() / rollback() on a JDBC connection)
XA Transactions - where the XASession acts as an XAResource by communicating with the Message Broker, rather like a JDBC Connection takes place in an XA transaction by communicating with the database.
http://activemq.apache.org/how-do-transactions-work.html
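As a rough sketch of the first kind (the broker URL and queue name below are placeholders, not from the question), a consumer using a JMS local transaction could look like this:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TransactedConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // 'true' makes the session transacted; the acknowledge-mode argument is then ignored
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("my.queue"));
        Message message = consumer.receive(5000);
        try {
            // ... save the message contents to the database ...
            session.commit();    // only now is the message removed from the queue
        } catch (Exception e) {
            session.rollback();  // the message stays on the queue and will be redelivered
        } finally {
            connection.close();
        }
    }
}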
Should I use XA transactions (two phase commit?)
A common use of JMS is to consume messages from a queue or topic, process them using a database or EJB, then acknowledge / commit the message.
If you are using more than one resource; e.g. reading a JMS message and writing to a database, you really should use XA - its purpose is to provide atomic transactions for multiple transactional resources. For example there is a small window from when you complete updating the database and your changes are committed up to the point at which you commit/acknowledge the message; if there is a network/hardware/process failure inside that window, the message will be redelivered and you may end up processing duplicates.
http://activemq.apache.org/should-i-use-xa.html
I'm trying to understand how transactions work in Spring AMQP. Reading the docs: https://docs.spring.io/spring-amqp/reference/html/#transactions , I understand the purpose of enabling transactions on the publisher side (Best Effort One Phase Commit pattern), but I have no idea why they could be necessary in a MessageListener.
Let's take an example:
acknowledgeMode=AUTO
Consume message using @RabbitListener
Insert data into database
Publish message using rabbitTemplate
According to the docs: https://docs.spring.io/spring-amqp/reference/html/#acknowledgeMode, if acknowledgeMode is set to AUTO and any subsequent operation fails, the listener will fail too and the message will be returned to the queue.
Another question: what's the difference between a local transaction and an external transaction in that case (i.e. calling container.setTransactionManager(transactionManager()); or not)?
I would appreciate some clarification :)
Enable transactions in the listener so that any/all downstream RabbitTemplate operations participate in the same transaction.
If there is a failure, the container will rollback the transaction (removing the publishes), nack the message (or messages if batch size is greater than one) and then commit the nacks so the message(s) will be redelivered.
When using an external transaction manager (such as JDBC), the container will synchronize the AMQP transaction with the external transaction.
Downstream templates participate in the transaction regardless of whether it is local (AMQP only) or synchronized.
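For example, a minimal sketch of that configuration (queue name and bean wiring are assumptions, not from the question): the container is marked channel-transacted and given the external (JDBC) transaction manager, and the template used for downstream publishes is channel-transacted as well.
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.transaction.PlatformTransactionManager;

public class TxListenerConfig {

    public SimpleMessageListenerContainer container(ConnectionFactory cf,
            PlatformTransactionManager jdbcTxManager, MessageListener listener) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("example.queue");
        container.setMessageListener(listener);
        container.setChannelTransacted(true);            // local AMQP transaction on the consumer channel
        container.setTransactionManager(jdbcTxManager);  // synchronize it with the external JDBC transaction
        return container;
    }

    public RabbitTemplate template(ConnectionFactory cf) {
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setChannelTransacted(true); // publishes join the transaction started by the container
        return template;
    }
}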
We have a larger multi-service Java Spring app that declares about 100 exchanges and queues in RabbitMQ on startup. Some are declared explicitly via beans, but most of them are declared implicitly via @RabbitListener annotations.
@Component
@RabbitListener(
        bindings = @QueueBinding(key = {"example.routingkey"},
                exchange = @Exchange(value = "example.exchange", type = ExchangeTypes.TOPIC),
                value = @Queue(name = "example_queue", autoDelete = "true", exclusive = "true")))
public class ExampleListener {

    @RabbitHandler
    public void handleRequest(final ExampleRequest request) {
        System.out.println("got request!");
    }
}
There are quite a lot of these listeners in the whole application.
The services of the application sometimes talk to each other via RabbitMQ, so take as an example a publisher that publishes a message to the example exchange that the above ExampleListener is bound to.
If that publish happens too early in the application lifecycle (but AFTER all the Spring lifecycle events are through, so after ApplicationReadyEvent and ContextStartedEvent), the binding of the example queue to the example exchange has not yet happened and the very first publish-and-reply chain will fail. In other words, the above ExampleListener would not print "got request".
We "fixed" this problem by simply waiting 3 seconds before we start sending any RabbitMQ messages, to give it time to declare all queues, exchanges and bindings, but this seems like a very suboptimal solution.
Does anyone else have some advice on how to fix this problem? It is quite hard to recreate, as I would guess that it only occurs with a large number of queues/exchanges/bindings that RabbitMQ cannot create fast enough. Forcing Spring to synchronize this creation process and wait for confirmation from RabbitMQ would probably fix this, but as far as I can see there is no built-in way to do it.
Are you using multiple connection factories?
Or are you setting usePublisherConnection on the RabbitTemplate? (which is recommended, especially for a complex application like yours).
Normally, a single connection is used and all users of it will block until the admin has declared all the elements (it is run as a connection listener).
If the template is using a different connection factory, it will not block because a different connection is used.
If that is the case, and you are using the CachingConnectionFactory, you can call createConnection().close() on the consumer connection factory during initialization, before sending any messages. That call will block until all the declarations are done.
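For example (a minimal sketch; the method and parameter names are just illustrative), during initialization:
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;

public class StartupSync {

    // Blocks until the RabbitAdmin has declared all exchanges, queues and bindings
    // on the consumer connection, then returns the connection to the cache.
    public void waitForDeclarations(CachingConnectionFactory consumerConnectionFactory) {
        consumerConnectionFactory.createConnection().close();
    }
}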
I have a Spring Boot service that needs to listen to messages across multiple RMQ vhosts. So far I only need to consume messages, though in the short future I might need to publish messages to a third vhost. For this reason I've moved towards explicit configuration of the RMQ connection factory - one connection factory per vhost.
Looking at the documentation the PooledChannelConnectionFactory fits my needs. I do not need strict ordering of the messages, correlated publisher confirms, or caching connections to a single vhost. Everything I do with rabbit is take a message and update an entry in the database.
@Bean
PooledChannelConnectionFactory pcf() throws Exception {
    ConnectionFactory rabbitConnectionFactory = new ConnectionFactory();
    // Set the credentials
    PooledChannelConnectionFactory pcf = new PooledChannelConnectionFactory(rabbitConnectionFactory);
    pcf.setPoolConfigurer((pool, tx) -> {
        if (tx) {
            // configure the transactional pool
        }
        else {
            // configure the non-transactional pool
        }
    });
    return pcf;
}
What I need help with is understanding the difference between the transactional and the non-transactional pool. My understanding of RMQ and AMQP is that everything is async unless you build RPC semantics on top of it (reply queues and exchanges). Given that, how can this channel pool have transactional properties?
My current approach is to disable one of the configurations by setting min/max to 0, and set the other to a min/max of 1. I do not expect to have extreme volume through the service, and I expect to be horizontally scaling the application which will scale the capacity to consume messages. Is there anything else I should be considering?
The pools are independent; you won't get any transactional channels as long as you don't use a RabbitTemplate with channelTransacted set to true, so there is no need to worry about configuring the pool like that.
Transactions can be used, for example, to atomically send a series of messages (all sent or none sent if the transaction is rolled back). Useful if you are synchronizing with some other transaction, such as JDBC.
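A sketch of that idea (the exchange and routing key names here are hypothetical): with a channel-transacted template, sends performed inside a Rabbit transaction are committed or rolled back together.
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.transaction.RabbitTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

public class AtomicBatchSender {

    public void sendBatchAtomically(ConnectionFactory cf) {
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setChannelTransacted(true); // sends go on a transacted channel

        TransactionTemplate tx = new TransactionTemplate(new RabbitTransactionManager(cf));
        tx.executeWithoutResult(status -> {
            template.convertAndSend("example.exchange", "example.routingkey", "message 1");
            template.convertAndSend("example.exchange", "example.routingkey", "message 2");
            // both sends are committed together when this block completes;
            // an exception here rolls both back
        });
    }
}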
I am using JTA UserTransaction to perform some database and JMS related activity.
The workflow is as below.
1. Start UserTransaction
2. Perform DB search operation
3. Perform DB update operation
4. Perform JMS send and receive operation ----> problematic workflow
5. Perform DB update operation
6. Commit the transaction.
The 4th step is creating a problem: the message sent is not persisted in the queue until the transaction is committed, and because of this the JMS receive functionality is broken.
Step 4 can't be performed before starting the JTA transaction, as it depends heavily on the other steps.
Is there any way I can handle this type of situation? Is there any way to bypass the transaction for step 4? Any help appreciated.
Thanks
We have a Java listener that reads text messages off of a queue in JBossMQ. If we have to reboot JBoss, the listener will not reconnect and start reading messages again. We just get messages in the listener's log file every 2 minutes saying it can't connect. Is there something we're not setting in our code or in JBossMQ? I'm new to JMS so any help will be greatly appreciated. Thanks.
You should implement javax.jms.ExceptionListener in your client code. You will need a method called onException. When the client's connection is lost, you should get a JMSException, and this method will be called automatically. The only thing you have to look out for is intentionally disconnecting from JBossMQ, since that will also throw an exception.
Some code might look like this:
public void onException(JMSException jsme)
{
    if (!closeRequested)
    {
        this.disconnect();
        this.establishConnection(connectionProps, queueName, uname, pword, clientID, messageSelector);
    }
    else
    {
        // Client requested close so do not try to reconnect
    }
}
In your "establishConnection" code, you would then implement a while(!initialized) construct that contains a try/catch inside of it. Until you are sure you have connected and subscribed properly, stay inside the while loop catching all JMS/Naming/etc. exceptions.
We've used this method for years with JBossMQ and it works great. We have never had a problem with our JMS clients not reconnecting after bouncing JBossMQ or losing our network connection.
I'd highly recommend you use the Spring abstractions for JMS such as the MessageListenerContainer to deal with reconnection, transactions and pooling for you. You just need to supply a MessageListener and configure the MessageListenerContainer with the ConnectionFactory and the container does the rest.
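A minimal sketch of that (destination name and recovery interval are example values, not from the question):
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerSetup {

    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory cf, MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("my.queue");
        container.setMessageListener(listener);
        container.setRecoveryInterval(2000); // keep retrying the broker connection every 2 seconds after a failure
        return container;
    }
}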
If you're purely a listener and do no other JMS calls other than connection setup, then the "onException() handler" answer is correct.
If you do any JMS calls in your code, just using onException() callback isn't sufficient. Problems are relayed from the JMS provider to the app either via an exception on a JMS method call or through the onException() callback. Not both.
So if you call any JMS methods from your code, you'll also want to invoke that reconnection logic if you get any exceptions on those calls.
Piece of advice from personal experience. Upgrade to JBoss Messaging. I've seen it in production for 4 months without problems. It has fully transparent failover - amongst many other features.
Also, if you do go with Spring, be very careful with the JmsTemplate.