I'm creating an application using Spring Boot with RabbitMQ.
I've created a configuration for Rabbit like this:
@Configuration
public class RabbitConfiguration {

    public static final String RESEND_DISPOSAL_QUEUE = "RESEND_DISPOSAL";

    @Bean
    public Queue resendDisposalQueue() {
        return new Queue(RESEND_DISPOSAL_QUEUE, true);
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        return factory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}
I've also created a listener for Rabbit messages like this:
@RabbitListener(queues = RESEND_DISPOSAL_QUEUE)
public void getResendDisposalPayload(String messageBody) {
    LOGGER.info("[getResendDisposalPayload] message = {}", messageBody);
    // And there is some business logic
}
Everything works pretty well, but there is one problem.
When I get an exception in the getResendDisposalPayload method, which listens to the RESEND_DISPOSAL_QUEUE queue (for example, a temporary problem with the database), Rabbit starts redelivering the last unprocessed message without any delay. This produces a huge amount of log output and is inconvenient for my system.
As I've read in this article https://www.baeldung.com/spring-amqp-exponential-backoff: "While using a Dead Letter Queue is a standard way to deal with failed messages".
In order to use this pattern I have to create a RetryOperationsInterceptor which defines the number of delivery attempts and the delay between attempts.
For example:
@Bean
public RetryOperationsInterceptor retryInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .backOffOptions(1000, 3.0, 10000)
            .maxAttempts(3)
            .recoverer(messageRecoverer)
            .build();
}
This sounds very good, but there is one problem: I can't define an infinite number of attempts in the maxAttempts option.
After maxAttempts is exceeded, I have to save the broken message somewhere and deal with it in the future, which demands extra code.
The question is: is there any way to configure Rabbit to redeliver broken messages forever with some delay, say a one-second delay?
Rabbit starts resend last not processed message without any delay
That's how redelivery works: it re-pushes the same message again and again until you ack it manually or drop it altogether. There is no delay between redeliveries because a new message is not pulled from the queue until something is done with the current one.
I can't define infinity attempt amount in options maxAttempts
Have you tried Integer.MAX_VALUE? That's a pretty decent number of attempts.
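For illustration, a minimal sketch of that suggestion combined with a fixed one-second back-off (the back-off values are assumptions, not from the original answer); with a stateless interceptor the retries happen in memory on the consumer thread, so the broker does not redeliver the message between attempts:
@Bean
public RetryOperationsInterceptor retryInterceptor() {
    // effectively "infinite" attempts with a constant 1-second delay between them
    return RetryInterceptorBuilder.stateless()
            .backOffOptions(1000, 1.0, 1000)   // initial interval, multiplier, max interval (ms)
            .maxAttempts(Integer.MAX_VALUE)
            .build();
}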
The other way is to use a Delayed Exchange: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange.
You can combine that retry with a RepublishMessageRecoverer to publish back into your original queue after the attempts are exhausted: https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
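For instance, a rough sketch of that combination (assuming the interceptor is wired into the listener container factory with factory.setAdviceChain(...), and that republishing through the default "" exchange with the queue name as routing key puts the message back onto the original queue):
@Bean
public RetryOperationsInterceptor retryInterceptor(RabbitTemplate rabbitTemplate) {
    return RetryInterceptorBuilder.stateless()
            .backOffOptions(1000, 2.0, 10000)  // illustrative back-off values
            .maxAttempts(3)
            // after the attempts are exhausted, republish to the original queue
            // via the default ("") exchange using the queue name as the routing key
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "", RESEND_DISPOSAL_QUEUE))
            .build();
}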
I'm creating an application that sends messages for time-expensive processing to a consumer using RabbitMQ. However, I need to prioritize messages. When a message with high priority arrives, it must be processed even if all consumer instances are processing other messages.
AFAIK there is no way to preempt the processing of low-priority messages and switch to high-priority messages in Spring Boot and RabbitMQ.
Is it possible to create consumers that accept only high-priority messages, or to run an additional set of consumers on the fly when all the others are busy and high-priority messages arrive?
I tried adding queues with the x-max-priority=10 argument and increasing the number of consumers, but it doesn't solve my problem.
Imagine that we run 50 consumers and send 50 messages with low priority. While time-expensive processing is being performed, a new message arrives with high priority but it cannot be processed at once because all 50 consumers are busy.
Here is the part of the configuration that sets the number of consumers:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        @Qualifier("rabbitConnectionFactory") ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setConcurrentConsumers(50);
    factory.setMaxConcurrentConsumers(100);
    return factory;
}
Is there a way to create a set of consumers that accept only high-priority messages (e.g. priority higher than 0), or to create consumers on the fly for high-priority messages?
I don't know of a way to implement the preemptive strategy you describe, but there are a number of alternatives that you could consider.
Priority Setting
The first thing to take into account is the priority support in RabbitMQ itself.
Consider this excerpt from RabbitMQ in Depth by Gavin M. Roy:
“As of RabbitMQ 3.5.0, the priority field has been implemented as per the AMQP specification. It’s defined as an integer with possible values of 0 through 9 to be used for message prioritization in queues. As specified, if a message with a priority of 9 is published, and subsequently a message with a priority of 0 is published, a newly connected consumer would receive the message with the priority of 0 before the message with a priority of 9”.
e.g.
rabbitTemplate.convertAndSend("Hello World!", message -> {
MessageProperties properties = MessagePropertiesBuilder.newInstance()
.setPriority(0)
.build();
return MessageBuilder.fromMessage(message)
.andProperties(properties)
.build();
});
Priority-based Exchange
A second alternative is to define a topic exchange and use a routing key that encodes your priority.
For example, consider an exchange of events using a routing key of pattern EventName.Priority e.g. OrderPlaced.High, OrderPlaced.Normal or OrderPlaced.Low.
Based on that, you could have a queue bound only to high-priority orders, i.e. OrderPlaced.High, and a number of dedicated consumers just for that queue.
e.g.
String routingKey = String.format("%s.%s", event.name(), event.priority());
rabbit.convertAndSend(routingKey, event);
With a listener like the one below, where the queue high-priority-orders is bound to the events exchange for the OrderPlaced event with High priority, using the routing key OrderPlaced.High:
@Component
@RabbitListener(queues = "high-priority-orders", containerFactory = "orders")
public class HighPriorityOrdersListener {

    @RabbitHandler
    public void onOrderPlaced(OrderPlacedEvent orderPlaced) {
        //...
    }
}
Obviously, you will need a dedicated thread pool (the orders container factory above) to attend to the high-priority requests.
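For completeness, a minimal sketch of what that dedicated "orders" container factory could look like (the bean name matches the containerFactory attribute above; the concurrency numbers are illustrative):
@Bean
public SimpleRabbitListenerContainerFactory orders(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // consumers reserved exclusively for the high-priority-orders queue
    factory.setConcurrentConsumers(5);
    factory.setMaxConcurrentConsumers(10);
    return factory;
}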
There is no mechanism in the AMQP protocol to "select" messages from a queue.
You might want to consider using discrete queues with dedicated consumers instead.
BTW, this is not Spring-related; general questions about RabbitMQ should be directed to the rabbitmq-users Google group.
Our application consumes data from several queues that are provided by RabbitMQ. To increase throughput we start several threads per queue that do blocking takes from those queues.
For a new service we want to use Spring Boot and again have several threads per queue that take data from those queues. Here is the canonical Spring Boot code for processing data that arrived from some queue:
@StreamListener(target = Processor.INPUT)
@SendTo(Processor.OUTPUT)
public Message<SomeData> process(Message<SomeData> message) {
    SomeData result = service.process(message.getPayload());
    return MessageBuilder
            .withPayload(result)
            .copyHeaders(message.getHeaders())
            .build();
}
The question now is how to make Spring Boot spawn several threads to serve one queue instead of a single thread. Throughput is very critical for our application, hence the need for this.
Check the available properties and search for rabbitmq:
spring.rabbitmq.listener.simple.concurrency= # Minimum number of listener invoker threads
That looks promising
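For example, in application.properties (the values are illustrative; max-concurrency is the companion setting for the upper bound):
spring.rabbitmq.listener.simple.concurrency=5
spring.rabbitmq.listener.simple.max-concurrency=10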
You can set the number of concurrent consumers for the queue when you configure the listener container factory.
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        MessageConverter contentTypeConverter,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    // the number of consumers is set to 5
    factory.setConcurrentConsumers(5);
    configurer.configure(factory, connectionFactory);
    factory.setMessageConverter(contentTypeConverter);
    return factory;
}
I am processing a high-volume stream of ~500+ messages per second. The data is consumed off Spring AMQP + Rabbit using a SimpleMessageListenerContainer with 10 concurrent consumers. Every 15 minutes I have to do some checks against the DB and reload certain properties for processing; this is done with a Quartz trigger which fires every 15 minutes, stops the SimpleMessageListenerContainer, does the necessary work and starts the container once again.
Everything works perfectly when the app starts up, but when the trigger fires and the container restarts, I see the same message being delivered multiple times, which causes a lot of duplicates. There are no exceptions thrown by the consumers.
The Message Listener
class RoundRobinQueueListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // do processing
    }
}
During app startup, set up the parallel consumers and start the container:
final SimpleMessageListenerContainer messageListenerContainer = new SimpleMessageListenerContainer(connectionFactory);
RoundRobinQueueListener roundRobinListener = RoundRobinQueueListener.class.newInstance();
messageListenerContainer.setQueueNames(queueName);
messageListenerContainer.setMessageListener(roundRobinListener);
messageListenerContainer.setConcurrentConsumers(10);
messageListenerContainer.setChannelTransacted(true);
The quartz trigger
void execute(JobExecutionContext context) throws JobExecutionException {
    messageListenerContainer.stop();
    // do DB task and other processing
    messageListenerContainer.start();
}
Looks like your messages are not acknowledged by the consumer. If you are not using auto-acknowledge mode, you need to acknowledge the message yourself (this can also be configured on the SimpleMessageListenerContainer). Otherwise, the broker presumes that the message was not processed successfully and tries to deliver it again.
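As an illustration only (not from the original answer), one way to acknowledge manually is to implement ChannelAwareMessageListener and switch the container to manual acknowledgements with messageListenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL):
class RoundRobinQueueListener implements ChannelAwareMessageListener {

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        // do processing
        // acknowledge only after the message has been processed successfully
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    }
}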
Note that I'd like multiple message listeners to handle successive messages from the topic concurrently. In addition, I'd like each message listener to operate transactionally, so that a processing failure in a given message listener would result in that listener's message remaining on the topic.
The Spring DefaultMessageListenerContainer seems to support concurrency for JMS queues only.
Do I need to instantiate multiple DefaultMessageListenerContainers?
If time flows down the vertical axis:
ListenerA reads msg 1 ListenerB reads msg 2 ListenerC reads msg 3
ListenerA reads msg 4 ListenerB reads msg 5 ListenerC reads msg 6
ListenerA reads msg 7 ListenerB reads msg 8 ListenerC reads msg 9
ListenerA reads msg 10 ListenerB reads msg 11 ListenerC reads msg 12
...
UPDATE:
Thanks for your feedback @T.Rob and @skaffman.
What I ended up doing was creating multiple DefaultMessageListenerContainers with concurrency=1 and then putting logic in the message listener so that only one thread would process a given message id.
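As a rough illustration of that de-duplication logic inside the MessageListener implementation (an assumption about the approach, not code from the original post; a bounded or time-based cache would be needed in a long-running system):
private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

@Override
public void onMessage(Message message) {
    try {
        // only the first listener thread to register this JMS message ID processes it
        if (processedIds.add(message.getJMSMessageID())) {
            // process the message
        }
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
}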
You don't want multiple DefaultMessageListenerContainer instances, no, but you do need to configure the DefaultMessageListenerContainer to be concurrent, using the concurrentConsumers property:
Specify the number of concurrent consumers to create. Default is 1. Specifying a higher value for this setting will increase the standard level of scheduled concurrent consumers at runtime: this is effectively the minimum number of concurrent consumers which will be scheduled at any given time. This is a static setting; for dynamic scaling, consider specifying the "maxConcurrentConsumers" setting instead.
Raising the number of concurrent consumers is recommendable in order to scale the consumption of messages coming in from a queue. However, note that any ordering guarantees are lost once multiple consumers are registered. In general, stick with 1 consumer for low-volume queues.
However, there's a big warning at the bottom:
Do not raise the number of concurrent consumers for a topic. This would lead to concurrent consumption of the same message, which is hardly ever desirable.
This is interesting, and makes sense when you think about it. The same would occur if you had multiple DefaultMessageListenerContainer instances.
I think perhaps you need to rethink your design, although I'm not sure what I'd suggest. Concurrent consumption of pub/sub messages seems like a perfectly reasonable thing to do, but how to avoid getting the same message delivered to all of your consumers at the same time?
At least in ActiveMQ, what you want is fully supported; it's called a Virtual Topic.
The concept is:
You create a virtual topic (simply by creating a topic whose name uses the prefix VirtualTopic.), e.g. VirtualTopic.Color.
You create a consumer subscribing to this virtual topic following the pattern Consumer.<clientName>.VirtualTopic.<topicName>, e.g. Consumer.client1.VirtualTopic.Color. By doing this, ActiveMQ will create a queue with that name, and that queue will subscribe to VirtualTopic.Color; every message published to the virtual topic will then be delivered to the client1 queue. Note that this works like RabbitMQ exchanges.
You are done: now you can consume the client1 queue like any other queue, with many consumers, a DLQ, a customized redelivery policy, etc.
At this point you've probably realized that you can create client2, client3 and as many subscribers as you want; all of them will receive a copy of the messages published to VirtualTopic.Color.
Here is the code:
@Component
public class ColorReceiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(MailReceiver.class);

    @Autowired
    private JmsTemplate jmsTemplate;

    // simply generating data for the topic
    long id = 0;

    @Scheduled(fixedDelay = 500)
    public void postMail() throws JMSException, IOException {
        final Color colorName = new Color[]{Color.BLUE, Color.RED, Color.WHITE}[new Random().nextInt(3)];
        final Color color = new Color(++id, colorName.getName());
        final ActiveMQObjectMessage message = new ActiveMQObjectMessage();
        message.setObject(color);
        message.setProperty("color", color.getName());
        LOGGER.info("status=color-post, color={}", color);
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.color"), message);
    }

    /**
     * Listens to all color messages
     */
    @JmsListener(
        destination = "Consumer.client1.VirtualTopic.color", containerFactory = "colorContainer",
        selector = "color <> 'RED'"
    )
    public void genericReceiveMessage(Color color) throws InterruptedException {
        LOGGER.info("status=GEN-color-receiver, color={}", color);
    }

    /**
     * Listens only to red color messages
     *
     * the destination clientId does not have to exist (it can be a fancy name); the only requirement is that
     * the containers' clientIds need to be different from each other
     */
    @JmsListener(
        // destination = "Consumer.redColorContainer.VirtualTopic.color",
        destination = "Consumer.client1.VirtualTopic.color",
        containerFactory = "redColorContainer", selector = "color='RED'"
    )
    public void receiveMessage(ObjectMessage message) throws InterruptedException, JMSException {
        LOGGER.info("status=RED-color-receiver, color={}", message.getObject());
    }

    /**
     * Listens to all color messages
     */
    @JmsListener(
        destination = "Consumer.client2.VirtualTopic.color", containerFactory = "colorContainer"
    )
    public void genericReceiveMessage2(Color color) throws InterruptedException {
        LOGGER.info("status=GEN-color-receiver-2, color={}", color);
    }
}
@SpringBootApplication
@EnableJms
@EnableScheduling
@Configuration
public class Config {

    /**
     * Each @JmsListener declaration needs a different containerFactory because ActiveMQ requires different
     * clientIds per consumer pool (as with the two @JmsListener methods above, or two application instances)
     */
    @Bean
    public JmsListenerContainerFactory<?> colorContainer(ActiveMQConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-5");
        configurer.configure(factory, connectionFactory);
        // container.setClientId("aId..."); let Spring generate a random ID
        return factory;
    }

    @Bean
    public JmsListenerContainerFactory<?> redColorContainer(ActiveMQConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        // necessary when posting serializable objects (you can also set it in application.properties)
        connectionFactory.setTrustedPackages(Arrays.asList(Color.class.getPackage().getName()));
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-2");
        configurer.configure(factory, connectionFactory);
        return factory;
    }
}
public class Color implements Serializable {

    public static final Color WHITE = new Color("WHITE");
    public static final Color BLUE = new Color("BLUE");
    public static final Color RED = new Color("RED");

    private String name;
    private long id;

    // CONSTRUCTORS, GETTERS AND SETTERS
}
Multiple consumers are allowed on the same topic subscription in JMS 2.0; this was not the case with JMS 1.1. Please refer to:
https://www.oracle.com/technetwork/articles/java/jms2messaging-1954190.html
This is one of those occasions where the differences in transport providers bubble up through the abstraction of JMS. JMS wants to provide a copy of the message for each subscriber on a topic. But the behavior that you want is really that of a queue. I suspect that there are other requirements driving this to a pub/sub solution which were not described - for example other things need to subscribe to the same topic independent of your app.
If I were to do this in WebSphere MQ the solution would be to create an administrative subscription which would result in a single copy of each message on the given topic to be placed onto a queue. Then your multiple subscribers could compete for messages on that queue. This way your app could have multiple threads among which the messages are distributed, and at the same time other subscribers independent of this application could dynamically (un)subscribe to the same topic.
Unfortunately, there's no generic JMS-portable way of doing this. You are dependent on the transport provider's implementation to a great degree. The only one of these I can speak to is WebSphere MQ but I'm sure other transports support this in one way or another and to varying degrees if you are creative.
Here's a possibility:
1) Create only one DMLC configured with the bean and method to handle the incoming message. Set its concurrency to 1.
2) Configure a task executor with its number of threads equal to the concurrency you desire. Create an object pool for the objects which actually process a message. Give a reference to the task executor and the object pool to the bean you configured in #1. The object pool is useful if the actual message-processing bean is not thread-safe.
3) For an incoming message, the bean in the DMLC creates a custom Runnable, points it to the message and the object pool, and hands it to the task executor.
4) The run method of the Runnable gets a bean from the object pool and calls its process method with the given message.
#4 can be managed with a proxy and the object pool to make it easier.
I haven't tried this solution yet, but it seems to fit the bill. Note that this solution is not as robust as an EJB MDB; Spring, for example, will not discard an object from the pool if it throws a RuntimeException. A rough sketch of steps 1-4 follows below.
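A rough sketch of steps 1-4, assuming a Spring TaskExecutor and a trivial BlockingQueue-based object pool (all names are illustrative, not from the original answer):
// Hypothetical processor interface; instances may not be thread-safe, hence the pool.
interface MessageProcessor {
    void process(Message message);
}

// The single-consumer DMLC delegates to this listener, which hands each message off to the thread pool.
public class DispatchingListener implements MessageListener {

    private final TaskExecutor taskExecutor;             // sized to the desired concurrency
    private final BlockingQueue<MessageProcessor> pool;  // pool of processors to borrow from

    public DispatchingListener(TaskExecutor taskExecutor, BlockingQueue<MessageProcessor> pool) {
        this.taskExecutor = taskExecutor;
        this.pool = pool;
    }

    @Override
    public void onMessage(Message message) {
        taskExecutor.execute(() -> {
            MessageProcessor processor = null;
            try {
                processor = pool.take();        // borrow a processor (blocks if none is free)
                processor.process(message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                if (processor != null) {
                    pool.offer(processor);      // return the processor even if process() failed
                }
            }
        });
    }
}
Keep in mind that the message is acknowledged when onMessage returns, so handing the work off to another thread gives up the delivery guarantees you would get by processing in the container thread.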
Creating a custom task executor seemingly solved the issue for me, without duplicate processing:
@Configuration
class BeanConfig {

    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskExecutor topicExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setAllowCoreThreadTimeOut(true);
        executor.setKeepAliveSeconds(300);
        executor.setCorePoolSize(4);
        executor.setQueueCapacity(0);
        executor.setThreadNamePrefix("TOPIC-");
        return executor;
    }

    @Bean
    JmsListenerContainerFactory<?> topicListenerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer,
            @Qualifier("topicExecutor") Executor topicExecutor) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setPubSubDomain(true);
        configurer.configure(factory, connectionFactory);
        factory.setPubSubDomain(true);
        factory.setSessionTransacted(false);
        factory.setSubscriptionDurable(false);
        factory.setTaskExecutor(topicExecutor);
        return factory;
    }
}
class MyBean {

    @JmsListener(destination = "MYTOPIC", containerFactory = "topicListenerFactory", concurrency = "1")
    public void receiveTopicMessage(SomeTopicMessage message) {}
}
I've run into the same problem. I'm currently investigating RabbitMQ, which seems to offer a perfect solution in a design pattern they call "work queues." More info here: http://www.rabbitmq.com/tutorials/tutorial-two-java.html
If you're not totally tied to JMS you might look into this. There might also be a JMS to AMQP bridge, but that might start to look hacky.
I'm having some fun (read: difficulties) getting RabbitMQ installed and running on my Mac, but I think I'm close to having it working. I will post back if I'm able to solve this.
In the server.xml configuration, the maxSessions setting lets you specify the number of sessions you want.
Came across this question. My configuration is:
Create a bean with id="DefaultListenerContainer", add a property name="concurrentConsumers" value="10" and a property name="maxConcurrentConsumers" value="50".
Works fine so far; I printed the thread id and verified that multiple threads do get created and also reused.