Spring Integration JMS creating ActiveMQ queue instead of topic

I am trying to use an ActiveMQ broker to deliver a message to two consumers listening on an automatic topic, employing Spring Integration facilities.
Here are my configuration beans (in common between publishers and subscribers):
#Value("${spring.activemq.broker-url}")
String brokerUrl;
#Value("${spring.activemq.user}")
String userName;
#Value("${spring.activemq.password}")
String password;
#Bean
public ConnectionFactory connectionFactory() {
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(brokerUrl);
connectionFactory.setUserName(userName);
connectionFactory.setPassword(password);
return connectionFactory;
}
#Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
DefaultJmsListenerContainerFactoryConfigurer configurer) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setPubSubDomain(true); //!!
configurer.configure(factory, connectionFactory);
return factory;
}
#Bean
public JmsTemplate jmsTemplate() {
JmsTemplate template = new JmsTemplate();
template.setConnectionFactory(connectionFactory());
template.setPubSubDomain(true); //!!
return template;
}
Here are beans for consumers:
#Bean(name = "jmsInputChannel")
public MessageChannel jmsInputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
.channel("jmsInputChannel").get();
}
//Consumes the message.
#ServiceActivator(inputChannel="jmsInputChannel")
public void receive(String msg){
System.out.println("Received Message: " + msg);
}
And these are the beans for the producer:
#Bean(name = "jmsOutputChannel")
public MessageChannel jmsOutputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(connectionFactory())
.destination("myTopic")
).get();
}
private static int counter = 1;
#Scheduled(initialDelay=5000, fixedDelay=2000)
public void send() {
String s = "Message number " + counter;
counter++;
jmsOutputChannel().send(MessageBuilder.withPayload(s).build());
}
I am NOT using an embedded ActiveMQ broker. I am using one broker, one producer and two consumers each in their own (docker) container.
My problem is that, while I have invoked setPubSubDomain(true) on both the JmsListenerContainerFactory and the JmsTemplate, my "topics" behave as queues: one consumer prints all the even-numbered messages, while the other prints all the odd-numbered ones.
In fact, by accessing the ActiveMQ web interface, I see that my "topics" (i.e. under the /topics.jsp page) are named ActiveMQ.Advisory.Consumer.Queue.myTopic and ActiveMQ.Advisory.Producer.Queue.myTopic, and "myTopic" does appear in the queues page (i.e. /queues.jsp).
The nodes get started in the following order:
AMQ broker
Consumer 1
Consumer 2
Producer
The first "topic" that gets created is ActiveMQ.Advisory.Consumer.Queue.myTopic, while the producer one appears only after the producer has started, obviously.
I am not an expert on ActiveMQ, so maybe the fact that my producer/consumer "topics" are named ".Queue" is just misleading. However, I do get the semantics described in the official ActiveMQ documentation for queues, rather than topics.
I have also looked at this question already; however, all of the channels I employ are already of the PublishSubscribeChannel kind.
What I need to achieve is having all messages delivered to all of my (possibly > 2) consumers.
UPDATE: I forgot to mention, my application.properties file already does contain spring.jms.pub-sub-domain=true, along with other settings.
Also, the version of Spring Integration that I am using is 4.3.12.RELEASE.
The problem is, I still get round-robin load-balanced semantics rather than publish-subscribe semantics.
As far as I can see in the link provided by @Hassen Bennour, I would expect to get an ActiveMQ.Advisory.Producer.Topic.myTopic and an ActiveMQ.Advisory.Consumer.Topic.myTopic row in the list of all topics. I suspect I am not using the Spring Integration libraries correctly, and thus I am setting up a Queue when I want to set up a Topic.
UPDATE 2: Sorry about the confusion: jmsOutputChannel2 is in fact jmsOutputChannel here; I have edited the main part. I am using a secondary "topic" in my code as a check, something for the producer to send messages to and receive replies from itself. The "topic" name differs as well, so... it's on a separate flow entirely.
I did achieve a little progress by changing the receiver flows in this way:
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
//return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
//.channel("jmsInputChannel").get();
return IntegrationFlows.from(Jms.publishSubscribeChannel(connectionFactory()).destination("myTopic")) //Jms.publishSubscribeChannel() rather than Jms.messageDrivenChannelAdapter()
.channel("jmsInputChannel").get();
}
This produces an advisory topic of type Consumer.Topic.myTopic rather than Consumer.Queue.myTopic on the broker, AND indeed a topic named just myTopic (as I can see from the topics tab). However, once the producer starts, a Producer.Queue advisory topic gets created, and messages get sent there but are never delivered.
The choice of adapter in the input flow seems to determine what kind of advisory consumer topic gets created (Topic vs Queue when switching to Jms.publishSubscribeChannel() from Jms.messageDrivenChannelAdapter()). However, I haven't been able to find anything akin to this for the output flow.
UPDATE 3: Problem solved, thanks to @Hassen Bennour. Recap:
I wired the jmsTemplate() in the producer's Jms.outboundAdapter():
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jsaTemplate())
.destination("myTopic")
).get();
}
And a more complex configuration for the consumer Jms.messageDrivenChannelAdapter():
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Though the smoothest and most flexible method is probably to have a bean like this...
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}
...and wire it as the destination for the adapters, rather than just a String.
Thanks again.

Add spring.jms.pub-sub-domain=true to application.properties.
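For example, a minimal application.properties might look like this (the broker values are assumptions, chosen to match the @Value placeholders above):
spring.jms.pub-sub-domain=true
# assumed broker settings matching the @Value placeholders
spring.activemq.broker-url=tcp://localhost:61616
spring.activemq.user=admin
spring.activemq.password=admin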
Or set it programmatically on the listener container factory:
@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // the configurer applies PubSubDomain from application.properties if defined, or false if not,
    // so setting it at the factory level must happen after this call
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    return factory;
}
ActiveMQ.Advisory.Consumer.Queue.myTopic is an advisory topic for a queue named myTopic.
Take a look here to read about advisory messages:
http://activemq.apache.org/advisory-message.html
UPDATE:
Update your definitions as below:
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jmsTemplate())
.destination("myTopic")
).get();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Or define the destination as a Topic and replace destination("myTopic") with destination(topic()):
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}
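For completeness, here is a sketch of how the two flows above could look with the Topic bean wired in (this assumes the Jms.outboundAdapter and Jms.container overloads that accept a javax.jms.Destination):
@Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
    return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jmsTemplate())
            .destination(topic()) // the javax.jms.Topic bean instead of a plain String
    ).get();
}

@Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
            Jms.container(connectionFactory(), topic()).get()))
            .channel("jmsInputChannel").get();
}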

Related

Does ActiveMQ Artemis support updating scheduled messages in a last value queue?

In my testing and review of the Artemis LastValueQueue code, it looks like the scheduling delay for a message takes precedence over its evaluation of the "last-value-key". In other words, if you schedule a message, it is only evaluated for replacing the last-value in the queue at the time it is prepared for delivery.
My question is whether I have correctly understood the code, and if so, if there's a workaround or a feature of ActiveMQ / Artemis that might help meet our requirements.
Our requirements are as follows:
Generate a message, and delay processing of that message to a point in the future (usually 30 seconds out).
If an updated version of the message is generated due to a new external event, replace any existing scheduled message with the new version of the message - the scheduled delivery time should also be updated, in addition to the message payload.
Some other notes:
My current prototype is using an embedded Artemis server
Spring-jms JmsTemplate is being used to produce messages
Spring-jms JmsListenerContainerFactory is being used to consume messages
We don't currently use Spring Boot, so you'll see some bean setup below.
ArtemisConfig.java:
@Configuration
@EnableJms
public class ArtemisConfig {

    @Bean
    public org.apache.activemq.artemis.core.config.Configuration configuration() throws Exception {
        org.apache.activemq.artemis.core.config.Configuration config = new ConfigurationImpl();
        config.addAcceptorConfiguration("in-vm", "vm://0");
        config.setPersistenceEnabled(true);
        config.setSecurityEnabled(false);
        config.setJournalType(JournalType.ASYNCIO);
        config.setCreateJournalDir(true);
        config.setJournalDirectory("/var/mq/journal");
        config.setBindingsDirectory("/var/mq/bindings");
        config.setLargeMessagesDirectory("/var/mq/large-messages");
        config.setJMXManagementEnabled(true);

        QueueConfiguration queueConfiguration = new QueueConfiguration("MYLASTVALUEQUEUE");
        queueConfiguration.setAddress("MYLASTVALUEQUEUE");
        queueConfiguration.setLastValueKey("uniqueJobId");
        queueConfiguration.setDurable(true);
        queueConfiguration.setEnabled(true);
        queueConfiguration.setRoutingType(RoutingType.ANYCAST);

        CoreAddressConfiguration coreAddressConfiguration = new CoreAddressConfiguration();
        coreAddressConfiguration.addQueueConfiguration(queueConfiguration);
        config.addAddressConfiguration(coreAddressConfiguration);
        return config;
    }

    @Bean
    public EmbeddedActiveMQ artemisServer() throws Exception {
        EmbeddedActiveMQ server = new EmbeddedActiveMQ();
        server.setConfiguration(configuration());
        server.start();
        return server;
    }

    @PreDestroy
    public void preDestroy() throws Exception {
        artemisServer().stop();
    }

    @Bean
    public ConnectionFactory activeMqConnectionFactory() throws Exception {
        return ActiveMQJMSClient.createConnectionFactory("vm://0", "artemis-client");
    }

    @Bean
    public JmsListenerContainerFactory<?> jmsListenerContainerFactory() throws Exception {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(activeMqConnectionFactory());
        factory.setSessionTransacted(true);
        factory.setConcurrency("8");
        factory.setMessageConverter(jacksonJmsMessageConverter());
        return factory;
    }

    @Bean
    public MessageConverter jacksonJmsMessageConverter() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        converter.setTargetType(MessageType.TEXT);
        converter.setTypeIdPropertyName("_type");
        return converter;
    }

    @Bean
    public JmsTemplate jmsTemplate() throws Exception {
        JmsTemplate jmsTemplate = new JmsTemplate(activeMqConnectionFactory());
        jmsTemplate.setMessageConverter(jacksonJmsMessageConverter());
        jmsTemplate.setDeliveryPersistent(true);
        return jmsTemplate;
    }

    @Bean
    QueueMessageService queueMessageService() {
        return new QueueMessageService();
    }
}
QueueMessageService.java
public class QueueMessageService {

    @Resource
    private JmsTemplate jmsTemplate;

    public void queueJobRequest(
            final String queue,
            final int priority,
            final long deliveryDelayInSeconds,
            final MyMessage jobRequest) {
        jmsTemplate.convertAndSend(queue, jobRequest, message -> {
            message.setJMSPriority(priority);
            if (deliveryDelayInSeconds > 0 && deliveryDelayInSeconds <= 86400) {
                message.setLongProperty(
                        Message.HDR_SCHEDULED_DELIVERY_TIME.toString(),
                        Instant.now().plus(deliveryDelayInSeconds, ChronoUnit.SECONDS).toEpochMilli()
                );
            }
            message.setStringProperty(Message.HDR_LAST_VALUE_NAME.toString(), "uniqueJobId");
            message.setStringProperty("uniqueJobId", jobRequest.getUniqueJobId().toString());
            return message;
        });
    }
}
Your understanding about the semantics of scheduled messages with a last-value queue is correct. When a message is scheduled it is not technically on the queue yet. It is not put onto the queue until the scheduled time arrives at which point last-value queue semantics are enforced.
Short of implementing a new feature I don't see how you can implement your desired behavior in any kind of automatic way. My recommendation at this point would be to use the management API (i.e. QueueControl) to manually remove the "old" scheduled message before you send the "new" scheduled message. You can use one of the removeMessage methods for this as they will work on scheduled messages and non-scheduled messages alike.
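A minimal sketch of that approach against the embedded server from the question, assuming the QueueControl is looked up through the server's management service (the ResourceNames.QUEUE prefix and the filter syntax are assumptions based on the Artemis management API; the priority and delay values are placeholders):
import org.apache.activemq.artemis.api.core.management.QueueControl;
import org.apache.activemq.artemis.api.core.management.ResourceNames;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class ScheduledJobReplacer {

    private final EmbeddedActiveMQ server;
    private final QueueMessageService queueMessageService;

    public ScheduledJobReplacer(EmbeddedActiveMQ server, QueueMessageService queueMessageService) {
        this.server = server;
        this.queueMessageService = queueMessageService;
    }

    // Remove any previously scheduled message with the same uniqueJobId, then send the new version.
    public void replaceScheduledJob(MyMessage jobRequest) throws Exception {
        QueueControl queueControl = (QueueControl) server.getActiveMQServer()
                .getManagementService()
                .getResource(ResourceNames.QUEUE + "MYLASTVALUEQUEUE");
        if (queueControl != null) {
            // removeMessages takes a filter and works on scheduled and non-scheduled messages alike
            queueControl.removeMessages("uniqueJobId = '" + jobRequest.getUniqueJobId() + "'");
        }
        // re-queue with a fresh 30-second delay (priority 4 is just a placeholder)
        queueMessageService.queueJobRequest("MYLASTVALUEQUEUE", 4, 30, jobRequest);
    }
}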

What's the difference between message listeners and the @JmsListener annotation?

So I'm diving deeper into the world of JMS.
I am writing some dummy projects right now to understand how to consume messages. I am using ActiveMQ Artemis as the message broker.
While following a tutorial, I stumbled upon something about consuming messages. What exactly is the difference between using a message listener to listen for messages and using the @JmsListener annotation?
This is what I have so far:
public class Receiver {

    @JmsListener(containerFactory = "jmsListenerContainerFactory", destination = "helloworld .q")
    public void receive(String message) {
        System.out.println("received message='" + message + "'.");
    }
}

@Configuration
@EnableJms
public class ReceiverConfig {

    @Value("${artemis.broker-url}")
    private String brokerUrl;

    @Bean
    public ActiveMQConnectionFactory activeMQConnectionFactory() {
        return new ActiveMQConnectionFactory(brokerUrl);
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(activeMQConnectionFactory());
        factory.setConcurrency("3-10");
        return factory;
    }

    @Bean
    public DefaultMessageListenerContainer orderMessageListenerContainer() {
        SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
        endpoint.setMessageListener(new StatusMessageListener("DMLC"));
        endpoint.setDestination("helloworld.q"); //Try renaming this and see what happens.
        return jmsListenerContainerFactory().createListenerContainer(endpoint);
    }

    @Bean
    public Receiver receiver() {
        return new Receiver();
    }
}

public class StatusMessageListener implements MessageListener {

    public StatusMessageListener(String dmlc) {
    }

    @Override
    public void onMessage(Message message) {
        System.out.println("In the onMessage().");
        System.out.println(message);
    }
}
From what I've read, we register a message listener with the listener container, which in turn is created by the container factory. So essentially the flow is this:
DefaultJmsListenerContainerFactory -> creates -> DefaultMessageListenerContainer -> registers a message listener, which is used to listen for messages from the configured destination.
From my research, I've gathered that MessageListeners are used to asynchronously consume messages from queues/topics, whilst the @JmsListener annotation is used to synchronously listen for messages?
Furthermore, there are a few other listener container factories out there, such as DefaultJmsListenerContainerFactory and SimpleJmsListenerContainerFactory, but I'm not sure I get the difference. I was reading https://codenotfound.com/spring-jms-listener-example.html and what I gathered from that is that the default factory uses a pull model, which suggests it's async, so why would it matter whether we consume the message via a MessageListener or the annotation? I'm a bit confused and muddled up, so I would like my doubts to be cleared up. Thanks!
This is a snippet of the output when sending 100 dummy messages (I just noticed it's not outputting the even-numbered messages..):
received message='This the 95 message.'.
In the onMessage().
ActiveMQMessage[ID:006623ca-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24068, durable=true, address=helloworld.q,userID=006623ca-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=00651257-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 97 message.'.
In the onMessage().
ActiveMQMessage[ID:006ba214-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24088, durable=true, address=helloworld.q,userID=006ba214-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=0069cd51-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 99 message.'.
The following configuration
@Configuration
@EnableJms
public class ReceiverConfig {
    //your config code here..
}
would ensure that every time a Message is received on the Destination named "helloworld .q", Receiver.receive() is called with the content of the message.
You can read more here: https://docs.spring.io/spring/docs

Consuming messages in parallel from ActiveMQ

Whenever I post a message to the queue, the first message gets picked up without any issue, but when I drop the second file, the message stays in the "pending" state for the duration of the thread sleep (2 minutes). To test how concurrency works in ActiveMQ, I have added a bean called ThreadService.
I have code like the below in JMSConfig.java:
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL("tcp://localhost:61616");
    connectionFactory.setPassword("admin");
    connectionFactory.setUserName("admin");
    connectionFactory.setTrustedPackages(Arrays.asList("com.jms.domain", "java.util"));
    connectionFactory.setMaxThreadPoolSize(1);
    return connectionFactory;
}

@Bean(destroyMethod = "stop", initMethod = "start")
@Primary
public PooledConnectionFactory pooledConnectionFactory(ConnectionFactory connectionFactory) {
    PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
    pooledConnectionFactory.setConnectionFactory(connectionFactory);
    pooledConnectionFactory.setMaxConnections(8);
    pooledConnectionFactory.setMaximumActiveSessionPerConnection(10);
    return pooledConnectionFactory;
}

@Bean
public JmsListenerContainerFactory<?> queueListenerFactory(ConnectionFactory connectionFactory, DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setConcurrency("1-5");
    return factory;
}
CamelRouter.java
from("file://E:/Camel")
.bean(ThreadService)
.to("activemq:MessageQueue");
ThreadService.java
public void threadService() throws Exception {
    Thread.sleep(120000); // simulates a long-running task (2 minutes)
}
How can I achieve concurrency in ActiveMQ so that messages in the pending state are dequeued in parallel?
I am confused, because your question subject is about consuming while your route is producing to ActiveMQ.
Parallel consumers
If you want to consume in parallel from a JMS queue, you normally configure multiple consumers.
If you want to do this for an individual consumer, you can append the option to the endpoint URI:
from("activemq:queue:myQueue?concurrentConsumers=5")
If you want to apply this as the default for every consumer, you can configure it in your bean setup:
@Bean
public JmsConfiguration jmsConfiguration() {
    JmsConfiguration jmsConfiguration = new JmsConfiguration();
    jmsConfiguration.setConnectionFactory(pooledConnectionFactory());
    jmsConfiguration.setConcurrentConsumers(5);
    return jmsConfiguration;
}

@Bean(name = "activemq")
public ActiveMQComponent activeMq() {
    ActiveMQComponent activeMQComponent = new ActiveMQComponent();
    activeMQComponent.setConfiguration(jmsConfiguration());
    return activeMQComponent;
}
Parallel producers
Well, your JMS-producing route has a file consumer, which is by definition single-threaded to avoid processing the same file with multiple consumers.
However, you can make your route multithreaded after file consumption with Camel's Threads DSL:
from("file://E:/Camel")
.threads(5) // continue asynchronous from here with 5 threads
.bean(ThreadService)
.to("activemq:MessageQueue");
This way, your "long running task" in ThreadService should no longer block other files, because the route continues asynchronously with 5 threads from the threads statement. The file consumer stays single-threaded.
But be aware! The threads statement breaks the current transaction. The file consumer hands the message over to a new thread, so if an error occurs later, the file consumer does not see it.

Spring Integration: how to read selected messages from a queue

If I have one queue and multiple subscribers, how do I code the subscribers to only remove the messages they are interested in? I can use a PublishSubscribeChannel to send the message to all subscribers, but it has no filtering feature, and I'm not clear if the messages are ever removed after delivery. Another option is to read all messages, and filter in the subscriber, but then I need to invent a Kafka-ish behavior for message indexing to prevent messages already seen being processed again.
Well, indeed there is no such persistent topic abstraction in Spring Integration out of the box. However, since you say you need an in-memory solution, how about starting an embedded ActiveMQ broker and using Jms.publishSubscribeChannel() based on a Topic destination? True, there is still no selector for the Spring Integration subscribers even with this type of MessageChannel, but you can still use .filter() to discard messages you are not interested in.
You can achieve the same with a Hazelcast ITopic:
@Bean
public ITopic<Message<?>> siTopic() {
    return hazelcastInstance().getTopic("siTopic");
}

@Bean
public IntegrationFlow subscriber1() {
    return IntegrationFlows.from(
            Flux.create(messageFluxSink ->
                    siTopic()
                            .addMessageListener(message ->
                                    messageFluxSink.next(message.getMessageObject()))))
            .filter("headers.myHeader == 'foo'")
            .get();
}

@Bean
public IntegrationFlow subscriber2() {
    return IntegrationFlows.from(
            Flux.create(messageFluxSink ->
                    siTopic()
                            .addMessageListener(message ->
                                    messageFluxSink.next(message.getMessageObject()))))
            .filter("headers.myHeader == 'bar'")
            .get();
}
Actually, looking at your plain in-memory model, I would even say that a simple QueueChannel bridged to a PublishSubscribeChannel, with the mentioned filter in each subscriber, should be fully enough for you:
@Bean
public PollableChannel queueChannel() {
    return new QueueChannel();
}

@Bean
@BridgeFrom("queueChannel")
public MessageChannel publishSubscribeChannel() {
    return new PublishSubscribeChannel();
}

@Bean
public IntegrationFlow subscriber1() {
    return IntegrationFlows.from(publishSubscribeChannel())
            .filter("headers.myHeader == 'foo'")
            .get();
}

@Bean
public IntegrationFlow subscriber2() {
    return IntegrationFlows.from(publishSubscribeChannel())
            .filter("headers.myHeader == 'bar'")
            .get();
}
UPDATE
One more option to use, instead of the PublishSubscribeChannel and filter combination, is the RecipientListRouter: https://docs.spring.io/spring-integration/docs/5.0.3.RELEASE/reference/html/messaging-routing-chapter.html#router-implementations-recipientlistrouter
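A rough sketch of that option with the Java DSL (this assumes the routeToRecipients spec with per-recipient SpEL expressions; "routingChannel" and the recipient channel names are arbitrary placeholders):
@Bean
public IntegrationFlow routingFlow() {
    // "routingChannel" is an arbitrary input channel name for this sketch
    return IntegrationFlows.from("routingChannel")
            .routeToRecipients(r -> r
                    // each message is sent only to the recipients whose expression evaluates to true
                    .recipient("fooChannel", "headers.myHeader == 'foo'")
                    .recipient("barChannel", "headers.myHeader == 'bar'"))
            .get();
}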

Find an appropriate queue-consumer pair

I need to implement a message pipeline with the help of the spring-integration libraries. At the beginning, as I see it now, it needs to contain several elements:
a. A messaging gateway,
@MessagingGateway(name = "entryGateway", defaultRequestChannel = "requestChannel")
public interface MessageGateway {
    public boolean processMessage(Message<?> message);
}
which is called when I want to start the pipeline:
messageGateway.processMessage(message);
b. A channel for transmitting the messages:
@Bean
public MessageChannel requestChannel() {
    return new DirectChannel();
}
c. A router, which then decides where the messages flow:
@MessageEndpoint
@Component
public class MessageTypeRouter {

    Logger log = Logger.getLogger(MessageTypeRouter.class);

    @Router(inputChannel = "requestChannel")
    public String processMessageByPayload(Message<?> message) {...}
}
There can be many incoming messages in a short period of time, so I wanted to implement channel (b) as a QueueChannel:
@Bean
public MessageChannel requestChannel() {
    return new QueueChannel();
}
On the other hand, I would like the router to start as soon as a message comes through the gateway, with the other messages waiting in the queue. But in this case I received an error saying that I should have used a poller.
Maybe you could give me some advice on how I can realize my scheme. Thank you in advance.
As we know, with XML config we must declare a <poller> component and mark it with default="true". That allows any PollingConsumer endpoint to pick up the default Poller from the context.
With Java config we must declare a @Bean for the same purpose:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setTrigger(new PeriodicTrigger(10));
    return pollerMetadata;
}
PollerMetadata.DEFAULT_POLLER is the specific constant for defining the default poller; the same bean name is used for the XML config in the default="true" case.
On the other hand, the @Router annotation has a poller attribute to specify something similar to what we do with a nested <poller> in XML, as shown in the sketch below.
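For illustration, a sketch of that variant applied to the router from the question (assuming the poller attribute accepts a @Poller with a fixedDelay; the returned channel name is hypothetical):
@MessageEndpoint
public class MessageTypeRouter {

    // polls "requestChannel" (the QueueChannel) every 10 ms with its own poller
    @Router(inputChannel = "requestChannel", poller = @Poller(fixedDelay = "10"))
    public String processMessageByPayload(Message<?> message) {
        // ... inspect the payload and pick a destination channel ...
        return "someOutputChannel"; // hypothetical channel name
    }
}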
