I am building a message-driven service in Spring that will run in a cluster and needs to pull messages from a RabbitMQ queue in a round-robin manner. The current implementation pulls messages off the queue on a first-come basis, leading to some servers getting backed up while others sit idle.
The current QueueConsumerConfiguration.java looks like:
@Configuration
public class QueueConsumerConfiguration extends RabbitMqConfiguration {

    private Logger LOG = LoggerFactory.getLogger(QueueConsumerConfiguration.class);

    private static final int DEFAULT_CONSUMERS = 2;

    @Value("${eventservice.inbound}")
    protected String inboundEventQueue;

    @Value("${eventservice.consumers}")
    protected int queueConsumers;

    @Autowired
    private EventHandler eventHandler;

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.inboundEventQueue);
        template.setQueue(this.inboundEventQueue);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public Queue inboundEventQueue() {
        return new Queue(this.inboundEventQueue);
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setQueueNames(this.inboundEventQueue);
        container.setMessageListener(messageListenerAdapter());
        if (this.queueConsumers > 0) {
            LOG.info("Starting queue consumers:" + this.queueConsumers);
            container.setMaxConcurrentConsumers(this.queueConsumers);
            container.setConcurrentConsumers(this.queueConsumers);
        } else {
            LOG.info("Starting default queue consumers:" + DEFAULT_CONSUMERS);
            container.setMaxConcurrentConsumers(DEFAULT_CONSUMERS);
            container.setConcurrentConsumers(DEFAULT_CONSUMERS);
        }
        return container;
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(this.eventHandler, jsonMessageConverter());
    }
}
Is it a case of just adding
container.setChannelTransacted(true);
to the configuration?
RabbitMQ treats all consumers the same - it sees no difference between multiple consumers in one container vs. one consumer in multiple containers (e.g. on different hosts). Each is simply a consumer from Rabbit's perspective.
If you want more control over server affinity, you need to use multiple queues with each container listening to its own queue.
You then control the distribution on the producer side - e.g. using a topic or direct exchange and specific routing keys to route messages to a specific queue.
This tightly binds the producer to the consumers (it has to know how many there are).
Or you could have your producer use routing keys rk.0, rk.1, ..., rk.29 (repeatedly, resetting to 0 when 30 is reached).
Then you can bind the consumer queues with multiple bindings - consumer 1 gets rk.0 to rk.9, consumer 2 gets rk.10 to rk.19, and so on.
If you then decide to increase the number of consumers, just refactor the bindings appropriately to redistribute the work.
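As a rough sketch of that layout (the exchange and queue names here are invented for illustration, and the Declarables holder assumes Spring AMQP 2.1+; on older versions each Binding would be declared as its own bean), each consumer instance declares its own queue and binds it once per routing key in its slice:
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Declarable;
import org.springframework.amqp.core.Declarables;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConsumerBindingConfiguration {

    // Hypothetical names for this sketch; substitute your own topology.
    private static final String EXCHANGE_NAME = "events.exchange";
    private static final String QUEUE_NAME = "events.consumer1";

    @Bean
    public DirectExchange eventsExchange() {
        return new DirectExchange(EXCHANGE_NAME);
    }

    @Bean
    public Queue consumer1Queue() {
        return new Queue(QUEUE_NAME);
    }

    // Consumer 1 takes rk.0 .. rk.9; a second instance would bind rk.10 .. rk.19, and so on.
    @Bean
    public Declarables consumer1Bindings() {
        List<Declarable> bindings = IntStream.rangeClosed(0, 9)
                .mapToObj(i -> (Declarable) BindingBuilder.bind(consumer1Queue())
                        .to(eventsExchange())
                        .with("rk." + i))
                .collect(Collectors.toList());
        return new Declarables(bindings);
    }
}
The producer then rotates the key on each send - for example rabbitTemplate.convertAndSend(EXCHANGE_NAME, "rk." + (counter.getAndIncrement() % 30), payload) with an AtomicInteger counter - so messages spread evenly across whichever queues currently hold the bindings.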
The container will scale up to maxConcurrentConsumers on demand but, practically, scaling down only occurs when the entire container is idle for some time.
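For completeness, a minimal sketch of the knobs SimpleMessageListenerContainer exposes for this scaling behaviour; the values below are illustrative only:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
container.setConcurrentConsumers(2);         // start with 2 consumers
container.setMaxConcurrentConsumers(10);     // grow up to 10 on demand
container.setConsecutiveActiveTrigger(10);   // consider adding a consumer after 10 deliveries with no idle gap
container.setConsecutiveIdleTrigger(10);     // consider stopping a consumer after 10 consecutive receive timeouts
container.setStopConsumerMinInterval(60000); // wait at least 60s between stopping idle consumers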
Related
In the RabbitMQ service I have configured 8 queues, and I am using the Spring client to send messages to RabbitMQ. I can send messages to the respective queues, but most of the time only a single queue is running while the rest sit idle. To give all queues a turn I reduced the configured prefetch count to 20, so that not all messages go to the workers (consumers) and leave the other queues idle. In spite of this, I don't see multiple queues running in parallel.
Below is the Spring configuration I used to set the prefetch count:
@Bean
public CachingConnectionFactory rabbitConnectionFactory() throws Exception {
    com.rabbitmq.client.ConnectionFactory factory = new com.rabbitmq.client.ConnectionFactory();
    factory.setHost(host);
    factory.setUsername(username);
    factory.setPassword(password);
    factory.setPort(5671);
    factory.useSslProtocol();
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory(factory);
    return connectionFactory;
}
I use a different container factory for each queue; some of them are shown below. (I am not sure why we use factories; my assumption is that they carry per-queue configuration such as the prefetch count.)
#Bean(name = "ordersimplecontainer")
public SimpleRabbitListenerContainerFactory simpleOrderListenerContainerFactory() throws Exception
{
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(rabbitConnectionFactory());
factory.setMessageConverter(new Jackson2JsonMessageConverter());
factory.setConcurrentConsumers(6);
factory.setMaxConcurrentConsumers(8);
factory.setPrefetchCount(20);
return factory;
}
#Bean(name = "productsimplecontainer")
public SimpleRabbitListenerContainerFactory simpleProductListenerContainerFactory() throws Exception
{
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(rabbitConnectionFactory());
factory.setMessageConverter(new Jackson2JsonMessageConverter());
factory.setConcurrentConsumers(3);
factory.setMaxConcurrentConsumers(4);
factory.setPrefetchCount(20);
return factory;
}
In the listener code I pass the respective container factory, as shown below:
@RabbitListener(queues = "<queueName>", containerFactory = "<factoryName>", autoStartup = "${autocreateworker}")
public void myListener(SftpStockDailySyncAsyncRequest sftpStockDailySyncRequest) {
}
The issue I have is that at any given time only one queue is running and the other queues sit idle; because of this, important queues must wait until they get a chance to run. The screenshot below depicts the scenario.
Please advise on how to resolve this issue.
As of now I have two worker machines which are listening to RabbitMQ server.
Whenever I post a message to the queue, the first message gets picked up without any issue, but when I drop the second file, its message stays in the "pending" state for the duration of the thread sleep (2 minutes). To test how concurrency works in ActiveMQ I added a bean called ThreadService.
I have the code below in JMSConfig.java:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL("tcp://localhost:61616");
    connectionFactory.setPassword("admin");
    connectionFactory.setUserName("admin");
    connectionFactory.setTrustedPackages(Arrays.asList("com.jms.domain", "java.util"));
    connectionFactory.setMaxThreadPoolSize(1);
    return connectionFactory;
}

@Bean(destroyMethod = "stop", initMethod = "start")
@Primary
public PooledConnectionFactory pooledConnectionFactory(ConnectionFactory connectionFactory) {
    PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
    pooledConnectionFactory.setConnectionFactory(connectionFactory);
    pooledConnectionFactory.setMaxConnections(8);
    pooledConnectionFactory.setMaximumActiveSessionPerConnection(10);
    return pooledConnectionFactory;
}

@Bean
public JmsListenerContainerFactory<?> queueListenerFactory(ConnectionFactory connectionFactory, DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setConcurrency("1-5");
    return factory;
}
CamelRouter.java
from("file://E:/Camel")
.bean(ThreadService)
.to("activemq:MessageQueue");
ThreadService.java
public class ThreadService {
    public void process() throws Exception {
        Thread.sleep(120000); // simulate a long-running task (2 minutes)
    }
}
How can I achieve concurrency in ActiveMQ, so that messages in the pending state are dequeued in parallel?
I am confused, because your question subject is about consuming while your route is producing to ActiveMQ.
Parallel consumers
If you want to consume in parallel from a JMS queue, you normally configure multiple consumers.
If you want to do this for an individual consumer, you can append it to the endpoint URI
from("activemq:queue:myQueue?concurrentConsumers=5"
If you want to apply this as the default for every consumer, you can configure it in your bean setup:
@Bean
public JmsConfiguration jmsConfiguration() {
    JmsConfiguration jmsConfiguration = new JmsConfiguration();
    jmsConfiguration.setConnectionFactory(pooledConnectionFactory());
    jmsConfiguration.setConcurrentConsumers(5);
    return jmsConfiguration;
}

@Bean(name = "activemq")
public ActiveMQComponent activeMq() {
    ActiveMQComponent activeMQComponent = new ActiveMQComponent();
    activeMQComponent.setConfiguration(jmsConfiguration());
    return activeMQComponent;
}
Parallel producers
Well, your JMS-producing route has a file consumer that is by definition single-threaded, to avoid processing the same file with multiple consumers.
However, you can turn your route multithreaded after file consumption with the Threads DSL of Camel
from("file://E:/Camel")
.threads(5) // continue asynchronous from here with 5 threads
.bean(ThreadService)
.to("activemq:MessageQueue");
Like this, your "long running task" in ThreadService should no longer block other files, because the route continues asynchronously with 5 threads from the threads statement. The file consumer stays single-threaded.
But be aware! The threads statement breaks the current transaction. The file consumer hands the message over to a new thread. If an error occurs later, the file consumer does not see it.
In a Spring Integration application with an AMQP integration with RabbitMQ, we experience unexpected behaviour.
The Spring Integration application (java configuration, dsl) consists of 3 flows and 2 persistent queues.
Let's say: flow1 -> queue1 -> flow2 -> queue2 -> flow3
flow1 starts with a Message that eventually gets split-up into 50 messages (.split()). This first flow writes to an AMQP / Rabbit MQ queue.
In the Rabbit UI we observe a jump from 0 messages to 50 messages. Ok so far.
Then, I think, an 'acknowledgement' follows and the 50 items in Rabbit become, so to speak, visible to consumers.
Then flow2 reads from this queue and starts processing the messages. Processing takes about 5 seconds per message. After a message is processed, it is written to the next queue (queue2).
The unexpected behaviour is that queue2 gets filled up until all 50 from queue1 are processed (250s later more or less).
I assume that flow2, between queue1 and queue2 handles all incoming requests within one single transaction. And that it will only acknowledge new messages on queue2 after all of the items on queue1 are processed.
I even think I experienced a case in which more items were inserted into queue1 while it was not empty yet. Then, after processing the initial 50 elements in flow2, it still didn't acknowledge them in queue2. It seems it only acknowledges items after queue1 is entirely empty.
Then flow3 starts processing in the same fashion: it only sees the items in queue2 after everything from queue1 is processed by flow2.
The effect is that the 50 messages are processed in batches instead of piece by piece. As soon as one message flows out of the .split(), I would like it to flow through all flows individually. So, is there a setting in Spring Integration, in AMQP, or in RabbitMQ that defaults to creating transactions over the entire workload?
Do I need to force the consumer to pick up only 1 message and create a transaction around that message? Or, should I 'acknowledge' messages individually? Or should I configure behavior in a more general fashion in java config?
My initial thought was that the DSL .split() logic was the reason. It adds headers like correlation IDs and sequence info. (I guess this is added to allow an aggregator to determine whether everything was processed.) For clarity: I have no (explicit) aggregator defined in my app.
My first approach was to clear the split-aggregate headers before inserting into queue1. But that didn't do the trick.
Also .split(s -> s.transactional(false)) didn't circumvent this.
EDIT:
Forget the flow/queue naming above. This is my Spring Integration code. I think I included the most relevant beans here.
The first stage creates empty messages from a poller. These are kind of the events that trigger a request to a feed (50 items in json).
Each of the 50 items (split) is saved in the first rabbit queue.
Then the second stage starts (incoming amqp messages are dropped in myChannel2). Via myChannel3 and myChannel4 it eventually is persisted in the second rabbit queue.
These two stages are handled kind-of in parallel. I see that FIRST_RABBIT_QUEUE gets filled every time with 50 new messages.
I also see that the second stage is executed: SECOND_RABBIT_QUEUE gets filled (and the counter of the first queue decreases). All fine.
But now the SECOND_RABBIT_QUEUE keeps growing and is never handled by myFlow3.
If the first queue grows faster than it is emptied, both queues (first,second) keep growing. When it is however emptied (counter back to zero), the third stage (myFlow3) starts working!
My configuration:
@Bean
public MessageChannel myChannel1() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel2() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel3() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel4() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel5() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel myChannel6() {
    return MessageChannels.direct().get();
}
@Bean
public IntegrationFlow outputAmqpFlow(final AmqpTemplate amqpTemplate) {
    return IntegrationFlows.from(AMQP_OUTPUT)
            .handle(Amqp.outboundAdapter(amqpTemplate)
                    .exchangeName(AmqpConfiguration.TOPIC_EXCHANGE)
                    .routingKeyExpression("headers['queueRoutingKey']"))
            .get();
}
private HeaderValueRouter router() {
    HeaderValueRouter router = new HeaderValueRouter(AmqpHeaders.CONSUMER_QUEUE);
    router.setChannelMapping(AmqpConfiguration.FIRST_RABBIT_QUEUE, "myChannel2");
    router.setChannelMapping(AmqpConfiguration.SECOND_RABBIT_QUEUE, "myChannel5");
    router.setResolutionRequired(false);
    router.setDefaultOutputChannelName("errorChannel");
    return router;
}
@Bean
public IntegrationFlow routeIncomingAmqpMessagesFlow(final SimpleMessageListenerContainer simpleMessageListenerContainer,
                                                     final Queue firstRabbitQueue,
                                                     final Queue secondRabbitQueue,
                                                     final Queue thirdRabbitQueue,
                                                     final Jackson2JsonMessageConverter jackson2MessageConverter) {
    simpleMessageListenerContainer.setQueues(
            firstRabbitQueue,
            secondRabbitQueue,
            thirdRabbitQueue
    );
    return IntegrationFlows.from(
            Amqp.inboundAdapter(simpleMessageListenerContainer)
                    .messageConverter(jackson2MessageConverter))
            .headerFilter("queueRoutingKey")
            .route(router())
            .get();
}
@Bean
public IntegrationFlow myFlow0() {
    return IntegrationFlows.<MessageSource>from(
            () -> new GenericMessage<>("trigger flow1"),
            c -> c.poller(Pollers.fixedRate(getPeriod(), initialDelay)))
            .channel(myChannel1())
            .get();
}

@Bean
public IntegrationFlow myFlow1() {
    return IntegrationFlows.from(myChannel1())
            .handle(String.class, (p, h) -> {
                try {
                    return getLast50MessagesFromWebsite();
                } catch (RestClientException e) {
                    throw new AmqpRejectAndDontRequeueException(e);
                }
            })
            .split()
            .enrichHeaders(h -> h.header("queueRoutingKey", AmqpConfiguration.FIRST_RABBIT_QUEUE))
            .channel(AMQP_OUTPUT) // persist in rabbit
            .get();
}

@Bean
public IntegrationFlow myFlow2_1() {
    return IntegrationFlows.from(myChannel2())
            .handle(this::downloadAndSave)
            .channel(myChannel3())
            .get();
}

@Bean
public IntegrationFlow myFlow2_2() {
    return IntegrationFlows.from(myChannel3())
            .transform(myDomainObjectTransformer)
            .handle(this::persistGebiedsinformatieLevering)
            .channel(myChannel4())
            .get();
}

@Bean
public IntegrationFlow myFlow2_3() {
    return IntegrationFlows.from(myChannel4())
            .handle(this::confirmMessage)
            .enrichHeaders(h -> h.header("queueRoutingKey", AmqpConfiguration.SECOND_RABBIT_QUEUE))
            .channel(AMQP_OUTPUT) // persist in rabbit
            .get();
}

@Bean
public IntegrationFlow myFlow3() {
    return IntegrationFlows.from(myChannel5())
            .log(LoggingHandler.Level.INFO)
            .get();
}
Transactions are never enabled by default so I don't think that's the issue (unless you have explicitly enabled them).
What you are describing is very odd. Bear in mind the UI is not real time, it only updates every few seconds, so it's not surprising you see it "jump" from 0 to 50.
It seems it only acknowledges items after queue1 is entirely empty.
Consumers know nothing about the queue or its contents.
.split(s -> s.transactional(false))
You should not do that at all; that is enabling transactions (I am surprised it works at all because it should need a transaction manager) but as long as the outbound adapter doesn't have a transactional RabbitTemplate it shouldn't matter.
You need to show your flow definitions and any configuration properties for anyone to help further.
I have multiple modules which communicate with each other by means of a message queue (Spring Rabbit). Some modules produce messages and others consume them. However, a single module can listen to several queues; I have the queue names in a list, so I created a SimpleMessageListenerContainer for each queue name as follows.
public void build() {
    for (String queueName : queues) {
        SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
        listenerContainer.setConnectionFactory(connectionFactory());
        listenerContainer.setQueueNames(queueName);
        listenerContainer.setMessageListener(listenerAdapter());
    }
}

@Bean
private MessageListenerAdapter listenerAdapter() {
    return new MessageListenerAdapter(new MessageListener() {
        @Override
        public void onMessage(Message message) {
            System.out.println(message.getBody());
        }
    }, "onMessage");
}
This implementation is not working for me: consumers are not registered on the queue, and no error or exception is thrown during execution.
Note: I am using Spring and I am limited to not using annotations such as @RabbitListener.
When you declare SimpleMessageListenerContainers manually, not as beans, you also have to take care of the application context callbacks and lifecycle yourself:
listenerContainer.setApplicationContext()
listenerContainer.setApplicationEventPublisher()
listenerContainer.afterPropertiesSet()
listenerContainer.start()
And don't forget to stop() and destroy() them at the end of the application's lifetime.
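Put together with the build() method from the question, a minimal sketch might look like this; applicationContext (an injected ApplicationContext, which also serves as the ApplicationEventPublisher) and the containers list used for shutdown are assumed fields:
public void build() {
    for (String queueName : queues) {
        SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
        listenerContainer.setConnectionFactory(connectionFactory());
        listenerContainer.setQueueNames(queueName);
        listenerContainer.setMessageListener(listenerAdapter());
        // Not a bean, so Spring will not make these calls for us.
        listenerContainer.setApplicationContext(applicationContext);
        listenerContainer.setApplicationEventPublisher(applicationContext);
        listenerContainer.afterPropertiesSet();
        listenerContainer.start();
        // Keep a reference so every container can be stop()ped and destroy()ed on shutdown.
        containers.add(listenerContainer);
    }
}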
I am trying to use an ActiveMQ broker to deliver a message to two consumers listening on an automatic topic, employing Spring Integration facilities.
Here are my configuration beans (in common between publishers and subscribers):
@Value("${spring.activemq.broker-url}")
String brokerUrl;

@Value("${spring.activemq.user}")
String userName;

@Value("${spring.activemq.password}")
String password;

@Bean
public ConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(brokerUrl);
    connectionFactory.setUserName(userName);
    connectionFactory.setPassword(password);
    return connectionFactory;
}
@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setPubSubDomain(true); //!!
    configurer.configure(factory, connectionFactory);
    return factory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(connectionFactory());
    template.setPubSubDomain(true); //!!
    return template;
}
Here are beans for consumers:
#Bean(name = "jmsInputChannel")
public MessageChannel jmsInputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
.channel("jmsInputChannel").get();
}
//Consumes the message.
#ServiceActivator(inputChannel="jmsInputChannel")
public void receive(String msg){
System.out.println("Received Message: " + msg);
}
And these are the beans for the producer:
#Bean(name = "jmsOutputChannel")
public MessageChannel jmsOutputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(connectionFactory())
.destination("myTopic")
).get();
}
private static int counter = 1;
#Scheduled(initialDelay=5000, fixedDelay=2000)
public void send() {
String s = "Message number " + counter;
counter++;
jmsOutputChannel().send(MessageBuilder.withPayload(s).build());
}
I am NOT using an embedded ActiveMQ broker. I am using one broker, one producer and two consumers each in their own (docker) container.
My problem is that, while I have invoked setPubSubDomain(true) on both the JmsListenerContainerFactory and the JmsTemplate, my "topics" behave as queues: one consumer prints all the even-numbered messages, while the other prints all the odd-numbered ones.
In fact, by accessing the ActiveMQ web interface, I see that my "topics" (i.e. under the /topics.jsp page) are named ActiveMQ.Advisory.Consumer.Queue.myTopic and ActiveMQ.Advisory.Producer.Queue.myTopic, and "myTopic" does appear in the queues page (i.e. /queues.jsp).
The nodes get started in the following order:
AMQ broker
Consumer 1
Consumer 2
Producer
The first "topic" that gets created is ActiveMQ.Advisory.Consumer.Queue.myTopic, while the producer one appears only after the producer has started, obviously.
I am not an expert on ActiveMQ, so maybe the fact of my producer/consumer "topics" being named ".Queue" is just misleading. However, I do get the semantics described in the official ActiveMQ documentation for queues, rather than topics.
I have also looked at this question already; however, all of the channels I employ are already of the PublishSubscribeChannel kind.
What I need to achieve is having all messages delivered to all of my (possibly > 2) consumers.
UPDATE: I forgot to mention, my application.properties file already does contain spring.jms.pub-sub-domain=true, along with other settings.
Also, the version of Spring Integration that I am using is 4.3.12.RELEASE.
The problem is, I still get round-robin load-balanced semantics rather than publish-subscribe semantics.
As for what I can see in the link provided by @Hassen Bennour, I would expect to get an ActiveMQ.Advisory.Producer.Topic.myTopic and an ActiveMQ.Advisory.Consumer.Topic.myTopic row in the list of all topics. Somehow I think I am not using the Spring Integration libraries well, and thus I am setting up a queue when I want to set up a topic.
UPDATE 2: Sorry about the confusion. jmsOutputChannel2 is in fact jmsOutputChannel here; I have edited the main part. I am using a secondary "topic" in my code as a check, something for the producer to send messages to and receive replies on itself. That "topic" name differs as well, so... it's on a separate flow entirely.
I did achieve a little progress by changing the receiver flows in this way:
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
//return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
//.channel("jmsInputChannel").get();
return IntegrationFlows.from(Jms.publishSubscribeChannel(connectionFactory()).destination("myTopic")) //Jms.publishSubscribeChannel() rather than Jms.messageDrivenChannelAdapter()
.channel("jmsInputChannel").get();
}
This produces an advisory topic of type Consumer.Topic.myTopic rather than Consumer.Queue.myTopic on the broker, AND indeed a topic named just myTopic (as I can see from the topics tab). However, once the producer starts, a Producer.Queue advisory topic gets created, and messages get sent there but are not delivered.
The choice of adapter in the input flow seems to determine what kind of advisory consumer topic gets created (Topic vs Queue when switching to Jms.publishSubscribeChannel() from Jms.messageDrivenChannelAdapter()). However, I haven't been able to find something akin for the output flow.
UPDATE 3: Problem solved, thanks to @Hassen Bennour. Recap:
I wired jmsTemplate() into the producer's Jms.outboundAdapter():
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jsaTemplate())
.destination("myTopic")
).get();
}
And a more complex configuration for the consumer Jms.messageDrivenChannelAdapter():
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Though this is probably the smoothest and most flexible method: having such a bean...
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}
to wire as a destination for the adapters, rather than just a String.
Thanks again.
Add spring.jms.pub-sub-domain=true to application.properties
or
@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // The configurer applies pub-sub-domain from application.properties if defined, or false if not,
    // so setting it at the factory level must happen after configure() is called.
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    return factory;
}
ActiveMQ.Advisory.Consumer.Queue.myTopic is an Advisory topic for a Queue named myTopic
Take a look here to read about advisory messages: http://activemq.apache.org/advisory-message.html
UPDATE: Update your definitions as below:
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jmsTemplate())
.destination("myTopic")
).get();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Or define the destination as a topic and replace destination("myTopic") with destination(topic()):
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}