How to make Spring Integration AMQP queue transactions decoupled?

In a Spring Integration application with an AMQP integration to RabbitMQ, we observe unexpected behaviour.
The Spring Integration application (Java configuration, DSL) consists of 3 flows and 2 persistent queues.
Let's say: flow1 -> queue1 -> flow2 -> queue2 -> flow3
flow1 starts with a Message that eventually gets split up into 50 messages (.split()). This first flow writes to an AMQP/RabbitMQ queue.
In the Rabbit UI we observe a jump from 0 messages to 50 messages. OK so far.
I think an acknowledgement then follows and the 50 items in Rabbit become, so to speak, visible to consumers.
Then flow2 reads from this queue and starts processing the messages. Processing takes about 5 seconds per message. After a message is processed, it is written to the next queue (queue2).
The unexpected behaviour is that queue2 fills up until all 50 messages from queue1 are processed (more or less 250 s later).
I assume that flow2, between queue1 and queue2, handles all incoming messages within one single transaction, and that it only acknowledges the new messages on queue2 after all of the items on queue1 are processed.
I even think I experienced a case in which more items were inserted into queue1 while it was not yet empty. Then, after processing the initial 50 elements, flow2 still didn't acknowledge them on queue2. It seems it only acknowledges items after queue1 is entirely empty.
Then flow3 starts processing in the same fashion: it only sees the items in queue2 after everything from queue1 is processed by flow2.
The effect is that the 50 messages are processed in batches instead of piece by piece. As soon as 1 message flows out of the .split(), I would like it to flow through all flows individually. So, is there a setting in Spring Integration, in AMQP, or in RabbitMQ that defaults to creating transactions around the entire workload?
Do I need to force the consumer to pick up only 1 message and create a transaction around that message? Or should I acknowledge messages individually? Or should I configure this behaviour in a more general fashion in the Java config?
My initial thought was that the DSL .split() logic was the reason. It adds headers like correlation IDs and sequence info (I guess to allow an aggregator to calculate whether everything was processed). For clarity: I have no (explicit) aggregator defined in my app.
My first approach was to clear the split-aggregate headers before inserting into queue1. But that didn't do the trick.
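For reference, that attempt looked roughly like this fragment (a sketch; the header names are the split/aggregate defaults from IntegrationMessageHeaderAccessor, and the varargs headerFilter(String...) signature of Spring Integration 5.x is assumed):
.split()
// strip the splitter's correlation/sequence headers before the message
// is written to the first Rabbit queue (this did NOT change the batching)
.headerFilter(IntegrationMessageHeaderAccessor.CORRELATION_ID,
        IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER,
        IntegrationMessageHeaderAccessor.SEQUENCE_SIZE)
.enrichHeaders(h -> h.header("queueRoutingKey", AmqpConfiguration.FIRST_RABBIT_QUEUE))
.channel(AMQP_OUTPUT)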
Also .split(s -> s.transactional(false)) didn't circumvent this.
EDIT:
Forget the flow/queue naming above. This is my Spring Integration code. I think I included the most relevant beans here.
The first stage creates empty messages from a poller. These are, so to speak, the events that trigger a request to a feed (50 items in JSON).
Each of the 50 items (split) is saved in the first rabbit queue.
Then the second stage starts (incoming AMQP messages are dropped in myChannel2). Via myChannel3 and myChannel4 it is eventually persisted in the second Rabbit queue.
These two stages run more or less in parallel. I see that FIRST_RABBIT_QUEUE gets filled with 50 new messages every time.
I also see that the second stage is executed: SECOND_RABBIT_QUEUE gets filled (and the counter of the first queue decreases). All fine.
But now SECOND_RABBIT_QUEUE keeps growing and is never handled by myFlow3.
If the first queue grows faster than it is emptied, both queues keep growing. However, when it is emptied (counter back to zero), the third stage (myFlow3) starts working!
My configuration:
@Bean
public MessageChannel myChannel1() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel2() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel3() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel4() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel5() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel6() {
    return MessageChannels.direct().get();
}

@Bean
public IntegrationFlow outputAmqpFlow(final AmqpTemplate amqpTemplate) {
    return IntegrationFlows.from(AMQP_OUTPUT)
            .handle(Amqp.outboundAdapter(amqpTemplate)
                    .exchangeName(AmqpConfiguration.TOPIC_EXCHANGE)
                    .routingKeyExpression("headers['queueRoutingKey']"))
            .get();
}

private HeaderValueRouter router() {
    HeaderValueRouter router = new HeaderValueRouter(AmqpHeaders.CONSUMER_QUEUE);
    router.setChannelMapping(AmqpConfiguration.FIRST_RABBIT_QUEUE, "myChannel2");
    router.setChannelMapping(AmqpConfiguration.SECOND_RABBIT_QUEUE, "myChannel5");
    router.setResolutionRequired(false);
    router.setDefaultOutputChannelName("errorChannel");
    return router;
}

@Bean
public IntegrationFlow routeIncomingAmqpMessagesFlow(final SimpleMessageListenerContainer simpleMessageListenerContainer,
        final Queue firstRabbitQueue,
        final Queue secondRabbitQueue,
        final Queue thirdRabbitQueue,
        final Jackson2JsonMessageConverter jackson2MessageConverter) {
    simpleMessageListenerContainer.setQueues(firstRabbitQueue, secondRabbitQueue, thirdRabbitQueue);
    return IntegrationFlows.from(
                Amqp.inboundAdapter(simpleMessageListenerContainer)
                        .messageConverter(jackson2MessageConverter))
            .headerFilter("queueRoutingKey")
            .route(router())
            .get();
}

@Bean
public IntegrationFlow myFlow0() {
    return IntegrationFlows.<MessageSource>from(
                () -> new GenericMessage<>("trigger flow1"),
                c -> c.poller(Pollers.fixedRate(getPeriod(), initialDelay)))
            .channel(myChannel1())
            .get();
}

@Bean
public IntegrationFlow myFlow1() {
    return IntegrationFlows.from(myChannel1())
            .handle(String.class, (p, h) -> {
                try {
                    return getLast50MessagesFromWebsite();
                } catch (RestClientException e) {
                    throw new AmqpRejectAndDontRequeueException(e);
                }
            })
            .split()
            .enrichHeaders(h -> h.header("queueRoutingKey", AmqpConfiguration.FIRST_RABBIT_QUEUE))
            .channel(AMQP_OUTPUT) // persist in Rabbit
            .get();
}

@Bean
public IntegrationFlow myFlow2_1() {
    return IntegrationFlows.from(myChannel2())
            .handle(this::downloadAndSave)
            .channel(myChannel3())
            .get();
}

@Bean
public IntegrationFlow myFlow2_2() {
    return IntegrationFlows.from(myChannel3())
            .transform(myDomainObjectTransformer)
            .handle(this::persistGebiedsinformatieLevering)
            .channel(myChannel4())
            .get();
}

@Bean
public IntegrationFlow myFlow2_3() {
    return IntegrationFlows.from(myChannel4())
            .handle(this::confirmMessage)
            .enrichHeaders(h -> h.header("queueRoutingKey", AmqpConfiguration.SECOND_RABBIT_QUEUE))
            .channel(AMQP_OUTPUT) // persist in Rabbit
            .get();
}

@Bean
public IntegrationFlow myFlow3() {
    return IntegrationFlows.from(myChannel5())
            .log(LoggingHandler.Level.INFO)
            .get();
}

Transactions are never enabled by default, so I don't think that's the issue (unless you have explicitly enabled them).
What you are describing is very odd. Bear in mind the UI is not real time; it only updates every few seconds, so it's not surprising you see it "jump" from 0 to 50.
It seems it only acknowledges items after queue1 is entirely empty.
Consumers know nothing about the queue or its contents.
.split(s -> s.transactional(false))
You should not do that at all; that is enabling transactions (I am surprised it works at all, because it should need a transaction manager), but as long as the outbound adapter doesn't have a transactional RabbitTemplate it shouldn't matter.
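For reference, explicitly enabling transactions would look roughly like the following sketch (hypothetical bean names); if nothing like this appears in your configuration, transactions are not in play:
@Bean
public RabbitTemplate transactedRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setChannelTransacted(true); // publishes participate in a channel transaction
    return template;
}

@Bean
public SimpleMessageListenerContainer transactedContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setChannelTransacted(true); // consume/ack within a transaction
    return container;
}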
You need to show your flow definitions and any configuration properties for anyone to help further.

Related

Strategies to implement callback mechanism / notify when all the asynchronous Spring Integration flows/threads execution is completed

I have a Spring Integration flow that gets triggered once a day; it pulls all parties from the database and sends each party to an executorChannel.
The next flow pulls data for each party and then processes them in parallel by sending them to a different executor channel.
The challenge I'm facing is how to know when this entire process ends. Any ideas on how to achieve this?
Here's my pseudo code of the executor channels and integration flows:
@Bean
public IntegrationFlow fileListener() {
    return IntegrationFlows.from(Files.inboundAdapter(new File("pathtofile")))
            .channel("mychannel")
            .get();
}

@Bean
public IntegrationFlow flowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle")
            .nullChannel();
}

@Bean
public IntegrationFlow parallelFlowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .nullChannel();
}

@Bean
public IntegrationFlow parallelFlowTwo() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelTwo")
            .handle("parallelServiceHandlerTwo", "handle")
            .nullChannel();
}

@Bean
public MessageChannel executorChannelOne() {
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}

@Bean
public MessageChannel executorChannelTwo() {
    return new ExecutorChannel(Executors.newFixedThreadPool(10));
}

@Component
@Scope("prototype")
public class ServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelOne;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("parties");
        rowDatas.stream().forEach(data -> {
            Message<?> dataMessage = MessageBuilder.withPayload(data).build();
            executorChannelOne.send(dataMessage);
        });
        return message;
    }
}

@Component
@Scope("prototype")
public class ParallelServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelTwo;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("party");
        rowDatas.stream().forEach(data -> {
            Message<?> dataMessage = MessageBuilder.withPayload(data).build();
            executorChannelTwo.send(dataMessage);
        });
        return message;
    }
}
First of all, there is no reason to make your services @Scope("prototype"): I don't see any state being held in your services, so they are stateless and can simply be singletons. Second: since your flows end with nullChannel(), there is no point in returning anything from your service methods. Just make them void and the flow will end there naturally.
Another observation: you use executorChannelOne.send(message) directly in the code of your service method. The same can be achieved if you simply return that new message from your service method and make that executorChannelOne the next .channel() in your flow definition after the handle("parallelServiceHandlerOne", "handle").
Since it looks like you do that in a loop, you might consider adding a .split() in between: the handler returns your List<?> rowDatas and the splitter takes care of iterating over that data, sending each item to that executorChannelOne.
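A minimal sketch of that refactoring (bean and channel names taken from the question; the repository dependency and its type are assumed):
@Component
public class ServiceHandlerOne {

    @Autowired
    PartyRepository repository; // hypothetical repository type

    // No direct channel interaction any more; just return the data.
    public List<?> handle(Message<?> message) {
        return repository.findAll("parties");
    }
}

@Bean
public IntegrationFlow flowOne() {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle") // returns List<?>
            .split()                               // one message per list element
            .channel("executorChannelOne")         // parallelFlowOne consumes from here
            .get();
}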
Now about your original question.
There is really no easy way to tell that your executors are not busy any more. They might simply not be busy at the moment you ask, because the message for a task has not reached an executor channel yet.
Typically we recommend some async synchronizer for your data. The aggregator is a good way to correlate several in-flight messages: the aggregator collects a group and does not emit a reply until that group is complete.
The splitter I mentioned above adds sequence detail headers by default, so a subsequent aggregator can track a message group easily.
Since you have layers in your flow, it looks like you need several aggregators: two for your executor channels after splitting, and one top-level aggregator for the file. The two lower ones reply to the top-level one for the final, per-file grouping.
You may also think about making those parties and party calls in parallel using a PublishSubscribeChannel, which can also be configured with applySequence=true. This info is then used by the top-level aggregator for the per-file grouping.
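A rough sketch of one such layer, relying on the sequence headers the upstream .split() adds (channel names here are hypothetical):
@Bean
public IntegrationFlow partyAggregationFlow() {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .aggregate()                          // correlates on the split()'s sequence headers
            .channel("perFileAggregatorChannel")  // hypothetical input of the top-level aggregator
            .get();
}
With the defaults, .aggregate() releases the group once sequenceSize messages with the same correlation id have arrived.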
See more in docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#splitter
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator

Consuming messages in parallel from ActiveMQ

Whenever I post a message to the queue, the first message gets picked up without any issue, but when I drop the second file, the message stays in the "pending" state for the thread sleep time (2 minutes). To test how concurrency works in ActiveMQ I added a bean called ThreadService.
I have code like the following in JMSConfig.java:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL("tcp://localhost:61616");
    connectionFactory.setPassword("admin");
    connectionFactory.setUserName("admin");
    connectionFactory.setTrustedPackages(Arrays.asList("com.jms.domain", "java.util"));
    connectionFactory.setMaxThreadPoolSize(1);
    return connectionFactory;
}

@Bean(destroyMethod = "stop", initMethod = "start")
@Primary
public PooledConnectionFactory pooledConnectionFactory(ConnectionFactory connectionFactory) {
    PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
    pooledConnectionFactory.setConnectionFactory(connectionFactory);
    pooledConnectionFactory.setMaxConnections(8);
    pooledConnectionFactory.setMaximumActiveSessionPerConnection(10);
    return pooledConnectionFactory;
}

@Bean
public JmsListenerContainerFactory<?> queueListenerFactory(ConnectionFactory connectionFactory, DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setConcurrency("1-5");
    return factory;
}
CamelRouter.java:
from("file://E:/Camel")
    .bean(ThreadService.class)
    .to("activemq:MessageQueue");
ThreadService.java:
public class ThreadService {
    public void process() throws Exception {
        Thread.sleep(120000); // simulate a long-running task
    }
}
How can I achieve concurrency in ActiveMQ, so that messages in the pending state are dequeued in parallel?
I am confused, because your question subject is about consuming, but your route is producing to ActiveMQ.
Parallel consumers
If you want to consume in parallel from a JMS queue, you normally configure multiple consumers.
If you want to do this for an individual consumer, you can append it to the endpoint URI
from("activemq:queue:myQueue?concurrentConsumers=5"
If you want to apply this as the default for every consumer, you can configure it in your bean setup:
@Bean
public JmsConfiguration jmsConfiguration() {
    JmsConfiguration jmsConfiguration = new JmsConfiguration();
    jmsConfiguration.setConnectionFactory(pooledConnectionFactory());
    jmsConfiguration.setConcurrentConsumers(5);
    return jmsConfiguration;
}

@Bean(name = "activemq")
public ActiveMQComponent activeMq() {
    ActiveMQComponent activeMQComponent = new ActiveMQComponent();
    activeMQComponent.setConfiguration(jmsConfiguration());
    return activeMQComponent;
}
Parallel producers
Well, your JMS producing route has a file consumer, which is by definition single-threaded, to avoid processing the same file with multiple consumers.
However, you can make your route multithreaded after file consumption with Camel's Threads DSL:
from("file://E:/Camel")
.threads(5) // continue asynchronous from here with 5 threads
.bean(ThreadService)
.to("activemq:MessageQueue");
Like this, your "long running task" in ThreadService should no longer block other files, because the route continues asynchronously with 5 threads from the threads statement. The file consumer stays single-threaded.
But be aware! The threads statement breaks the current transaction: the file consumer hands the message over to a new thread, so if an error occurs later, the file consumer does not see it.

Spring Integration: how to read selected messages from a queue

If I have one queue and multiple subscribers, how do I code the subscribers to only remove the messages they are interested in? I can use a PublishSubscribeChannel to send the message to all subscribers, but it has no filtering feature, and I'm not clear whether the messages are ever removed after delivery. Another option is to read all messages and filter in the subscriber, but then I need to invent Kafka-ish message indexing to prevent messages that were already seen from being processed again.
Well, indeed there is no such persistent topic abstraction in Spring Integration out of the box. However, since you say you need an in-memory solution, how about starting an embedded ActiveMQ broker and using Jms.publishSubscribeChannel() based on a Topic destination? Right, there is still no selector for the Spring Integration subscribers even with this type of MessageChannel, but you can still use .filter() to discard messages you are not interested in.
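A minimal sketch of that option (the embedded vm:// broker URL, topic name, and header values are assumptions):
@Bean
public ConnectionFactory jmsConnectionFactory() {
    // embedded, non-persistent broker, just for the in-memory case
    return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
}

@Bean
public IntegrationFlow fooSubscriber(ConnectionFactory jmsConnectionFactory) {
    return IntegrationFlows.from(
                Jms.publishSubscribeChannel(jmsConnectionFactory)
                        .destination("siTopic"))  // a JMS Topic: every subscriber sees every message
            .filter("headers.myHeader == 'foo'")  // each subscriber discards what it doesn't want
            .get();
}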
You can achieve the same with a Hazelcast ITopic:
@Bean
public ITopic<Message<?>> siTopic() {
    return hazelcastInstance().getTopic("siTopic");
}

@Bean
public IntegrationFlow subscriber1() {
    return IntegrationFlows.from(
                Flux.create(messageFluxSink ->
                        siTopic().addMessageListener(message ->
                                messageFluxSink.next(message.getMessageObject()))))
            .filter("headers.myHeader == 'foo'")
            .get();
}

@Bean
public IntegrationFlow subscriber2() {
    return IntegrationFlows.from(
                Flux.create(messageFluxSink ->
                        siTopic().addMessageListener(message ->
                                messageFluxSink.next(message.getMessageObject()))))
            .filter("headers.myHeader == 'bar'")
            .get();
}
Well, actually, looking at your plain in-memory model, I would even say that a simple QueueChannel bridged to a PublishSubscribeChannel, with the mentioned filter in each subscriber, should be fully enough for you:
@Bean
public PollableChannel queueChannel() {
    return new QueueChannel();
}

@Bean
@BridgeFrom("queueChannel")
public MessageChannel publishSubscribeChannel() {
    return new PublishSubscribeChannel();
}

@Bean
public IntegrationFlow subscriber1() {
    return IntegrationFlows.from(publishSubscribeChannel())
            .filter("headers.myHeader == 'foo'")
            .get();
}

@Bean
public IntegrationFlow subscriber2() {
    return IntegrationFlows.from(publishSubscribeChannel())
            .filter("headers.myHeader == 'bar'")
            .get();
}
UPDATE
And one more option to use instead of the PublishSubscribeChannel-and-filter combination is the RecipientListRouter: https://docs.spring.io/spring-integration/docs/5.0.3.RELEASE/reference/html/messaging-routing-chapter.html#router-implementations-recipientlistrouter
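A short sketch of that router variant (channel names are hypothetical):
@Bean
public IntegrationFlow routingFlow() {
    return IntegrationFlows.from("inputChannel") // hypothetical input channel
            .routeToRecipients(r -> r
                    .recipient("subscriber1Channel", "headers.myHeader == 'foo'")
                    .recipient("subscriber2Channel", "headers.myHeader == 'bar'"))
            .get();
}
Each recipient whose expression evaluates to true gets its own copy of the message, so the filtering happens once in the router instead of in every subscriber.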

Spring Integration JMS creating ActiveMQ queue instead of topic

I am trying to use an ActiveMQ broker to deliver a message to two consumers listening on an automatically created topic, employing Spring Integration facilities.
Here are my configuration beans (shared between publishers and subscribers):
@Value("${spring.activemq.broker-url}")
String brokerUrl;

@Value("${spring.activemq.user}")
String userName;

@Value("${spring.activemq.password}")
String password;

@Bean
public ConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(brokerUrl);
    connectionFactory.setUserName(userName);
    connectionFactory.setPassword(password);
    return connectionFactory;
}

@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setPubSubDomain(true); //!!
    configurer.configure(factory, connectionFactory);
    return factory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(connectionFactory());
    template.setPubSubDomain(true); //!!
    return template;
}
Here are beans for consumers:
#Bean(name = "jmsInputChannel")
public MessageChannel jmsInputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
.channel("jmsInputChannel").get();
}
//Consumes the message.
#ServiceActivator(inputChannel="jmsInputChannel")
public void receive(String msg){
System.out.println("Received Message: " + msg);
}
And these are the beans for the producer:
#Bean(name = "jmsOutputChannel")
public MessageChannel jmsOutputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(connectionFactory())
.destination("myTopic")
).get();
}
private static int counter = 1;
#Scheduled(initialDelay=5000, fixedDelay=2000)
public void send() {
String s = "Message number " + counter;
counter++;
jmsOutputChannel().send(MessageBuilder.withPayload(s).build());
}
I am NOT using an embedded ActiveMQ broker. I am using one broker, one producer and two consumers, each in their own (Docker) container.
My problem is that, while I have invoked setPubSubDomain(true) on both the JmsListenerContainerFactory and the JmsTemplate, my "topics" behave as queues: one consumer prints all the even-numbered messages, while the other prints all the odd-numbered ones.
In fact, in the ActiveMQ web interface I see that my "topics" (i.e. under the /topics.jsp page) are named ActiveMQ.Advisory.Consumer.Queue.myTopic and ActiveMQ.Advisory.Producer.Queue.myTopic, and "myTopic" does appear in the queues page (i.e. /queues.jsp).
The nodes get started in the following order:
AMQ broker
Consumer 1
Consumer 2
Producer
The first "topic" that gets created is ActiveMQ.Advisory.Consumer.Queue.myTopic, while the producer one appears only after the producer has started, obviously.
I am not an expert on ActiveMQ, so maybe the fact that my producer/consumer "topics" are named ".Queue" is just misleading. However, I do get the semantics described in the official ActiveMQ documentation for queues, rather than for topics.
I have also looked at this question already; however, all of my channels are already of the PublishSubscribeChannel kind.
What I need to achieve is having all messages delivered to all of my (possibly > 2) consumers.
UPDATE: I forgot to mention that my application.properties file already contains spring.jms.pub-sub-domain=true, along with other settings.
Also, the version of Spring Integration that I am using is 4.3.12.RELEASE.
The problem is, I still get round-robin load-balanced semantics rather than publish-subscribe semantics.
From what I can see in the link provided by @Hassen Bennour, I would expect ActiveMQ.Advisory.Producer.Topic.myTopic and ActiveMQ.Advisory.Consumer.Topic.myTopic rows in the list of all topics. Somehow I think I am not using the Spring Integration libraries well, and am thus setting up a Queue where I want a Topic.
UPDATE 2: Sorry about the confusion. jmsOutputChannel2 is in fact jmsOutputChannel here; I have edited the main part. I use a secondary "topic" in my code as a check, something for the producer to send messages to and receive replies on itself. The "topic" name differs as well, so it's an entirely separate flow.
I did achieve a little progress by changing the receiver flow this way:
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
//return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
//.channel("jmsInputChannel").get();
return IntegrationFlows.from(Jms.publishSubscribeChannel(connectionFactory()).destination("myTopic")) //Jms.publishSubscribeChannel() rather than Jms.messageDrivenChannelAdapter()
.channel("jmsInputChannel").get();
}
This produces an advisory topic of type Consumer.Topic.myTopic rather than Consumer.Queue.myTopic on the broker, AND indeed a topic named just myTopic (as I can see from the topics tab). However, once the producer starts, a Producer.Queue advisory topic gets created, and messages are sent there while never being delivered.
The choice of adapter in the input flow seems to determine what kind of advisory consumer topic gets created (Topic vs. Queue when switching to Jms.publishSubscribeChannel() from Jms.messageDrivenChannelAdapter()). However, I haven't been able to find anything akin to this for the output flow.
UPDATE 3: Problem solved, thanks to @Hassen Bennour. Recap:
I wired the jmsTemplate() into the producer's Jms.outboundAdapter():
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jsaTemplate())
.destination("myTopic")
).get();
}
And a more complex configuration for the consumer Jms.messageDrivenChannelAdapter():
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Though the smoothest and most flexible method is probably having a bean such as...
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}
to wire as a destination for the adapters, rather than just a String.
Thanks again.
add spring.jms.pub-sub-domain=true to application.properties
or
@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // The configurer applies pub-sub-domain from application.properties if defined
    // (false otherwise), so setting it at the factory level must happen after this.
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    return factory;
}
ActiveMQ.Advisory.Consumer.Queue.myTopic is an advisory topic for a Queue named myTopic.
Take a look here to read about advisory messages:
http://activemq.apache.org/advisory-message.html
UPDATE:
Update your definitions as below:
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jmsTemplate())
.destination("myTopic")
).get();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
or define the Destination as a topic and replace destination("myTopic") with destination(topic()):
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}

How to implement a round-robin queue consumer in Spring boot

I am building a message-driven service in Spring that will run in a cluster and needs to pull messages from a RabbitMQ queue in a round-robin manner. The implementation currently pulls messages off the queue on a first-come basis, leading to some servers getting backed up while others sit idle.
The current QueueConsumerConfiguration.java looks like :
@Configuration
public class QueueConsumerConfiguration extends RabbitMqConfiguration {

    private Logger LOG = LoggerFactory.getLogger(QueueConsumerConfiguration.class);

    private static final int DEFAULT_CONSUMERS = 2;

    @Value("${eventservice.inbound}")
    protected String inboundEventQueue;

    @Value("${eventservice.consumers}")
    protected int queueConsumers;

    @Autowired
    private EventHandler eventHandler;

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.inboundEventQueue);
        template.setQueue(this.inboundEventQueue);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public Queue inboundEventQueue() {
        return new Queue(this.inboundEventQueue);
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setQueueNames(this.inboundEventQueue);
        container.setMessageListener(messageListenerAdapter());
        if (this.queueConsumers > 0) {
            LOG.info("Starting queue consumers:" + this.queueConsumers);
            container.setMaxConcurrentConsumers(this.queueConsumers);
            container.setConcurrentConsumers(this.queueConsumers);
        } else {
            LOG.info("Starting default queue consumers:" + DEFAULT_CONSUMERS);
            container.setMaxConcurrentConsumers(DEFAULT_CONSUMERS);
            container.setConcurrentConsumers(DEFAULT_CONSUMERS);
        }
        return container;
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(this.eventHandler, jsonMessageConverter());
    }
}
Is it a case of just adding
container.setChannelTransacted(true);
to the configuration?
RabbitMQ treats all consumers the same: it knows no difference between multiple consumers in one container vs. one consumer in multiple containers (e.g. on different hosts). Each is just a consumer from Rabbit's perspective.
If you want more control over server affinity, you need to use multiple queues with each container listening to its own queue.
You then control the distribution on the producer side - e.g. using a topic or direct exchange and specific routing keys to route messages to a specific queue.
This tightly binds the producer to the consumers (it has to know how many there are).
Or you could have your producer use routing keys rk.0, rk.1, ..., rk.29 (repeatedly, resetting to 0 when 30 is reached).
Then you can bind the consumer queues with multiple bindings: consumer 1 gets rk.0 to rk.9, consumer 2 gets rk.10 to rk.19, and so on.
If you then decide to increase the number of consumers, just refactor the bindings appropriately to redistribute the work.
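A sketch of that binding scheme with Spring AMQP (assuming a direct exchange named "events", a per-instance queue, and Spring AMQP 2.1+ for Declarables):
@Bean
public DirectExchange eventsExchange() {
    return new DirectExchange("events"); // hypothetical exchange name
}

@Bean
public Queue consumerOneQueue() {
    return new Queue("events.consumer1"); // this instance's own queue
}

@Bean
public Declarables consumerOneBindings(Queue consumerOneQueue, DirectExchange eventsExchange) {
    List<Declarable> bindings = new ArrayList<>();
    for (int i = 0; i < 10; i++) { // consumer 1 owns rk.0 .. rk.9
        bindings.add(BindingBuilder.bind(consumerOneQueue).to(eventsExchange).with("rk." + i));
    }
    return new Declarables(bindings);
}
To rebalance onto more consumers, only the per-instance loop bounds change; the producer keeps cycling rk.0 .. rk.29 unchanged.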
The container will scale up to maxConcurrentConsumers on demand but, practically, scaling down only occurs when the entire container is idle for some time.
