I need to implement a message pipeline with the help of the spring-integration libraries. At the moment, as I see it, it needs to contain several elements:
a. Messaging gateway,
@MessagingGateway(name = "entryGateway", defaultRequestChannel = "requestChannel")
public interface MessageGateway {
    boolean processMessage(Message<?> message);
}
which is called when I want to start the pipeline:
messageGateway.processMessage(message);
b. Channel for transmitting the messages:
@Bean
public MessageChannel requestChannel() {
    return new DirectChannel();
}
c. Router, which decides where the messages should flow:
@MessageEndpoint
@Component
public class MessageTypeRouter {
    Logger log = Logger.getLogger(MessageTypeRouter.class);

    @Router(inputChannel = "requestChannel")
    public String processMessageByPayload(Message<?> message) {...}
}
There can be many incoming messages in a short period of time, so I wanted to implement channel (b) as a QueueChannel:
@Bean
public MessageChannel requestChannel() {
    return new QueueChannel();
}
On the other hand, I would like the router to start as soon as a message comes through the gateway, with the other messages waiting in the queue. But in this case I received an error saying that I should have used a poller.
Maybe you could give me some advice on how to realize my scheme. Thank you in advance.
As we know, with XML config we must declare a <poller> component and mark it with default="true". That allows any PollingConsumer endpoint to pick up the default poller from the context.
With Java config we must declare a @Bean for a similar purpose:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setTrigger(new PeriodicTrigger(10));
    return pollerMetadata;
}
Here PollerMetadata.DEFAULT_POLLER is a specific constant used to define the default poller; the same bean name is used under the covers by the XML config in the case of default="true".
On the other side, the @Router annotation has a poller attribute to specify something similar to what we do with a nested <poller> in XML.
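For example, a minimal sketch of the router method from the question with an inline poller (the fixedDelay value and the target channel names are just illustrative assumptions, not recommendations):

@Router(inputChannel = "requestChannel",
        poller = @Poller(fixedDelay = "100", maxMessagesPerPoll = "1"))
public String processMessageByPayload(Message<?> message) {
    // route by payload type; these channel names are assumptions
    return message.getPayload() instanceof String ? "textChannel" : "binaryChannel";
}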
I have a Spring Integration flow that gets triggered once a day; it pulls all parties from the database and sends each party to an executorChannel.
The next flow would pull data for each party and then process them in parallel by sending them to a different executor channel.
The challenge I'm facing is how to know when this entire process ends. Any ideas on how to achieve this?
Here's my pseudocode for the executor channels and integration flows.
@Bean
public IntegrationFlow fileListener() {
    return IntegrationFlows.from(Files.inboundAdapter(new File("pathtofile")))
            .channel("mychannel").get();
}
@Bean
public IntegrationFlow flowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("mychannel").handle("serviceHandlerOne", "handle").nullChannel();
}

@Bean
public IntegrationFlow parallelFlowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelOne").handle("parallelServiceHandlerOne", "handle").nullChannel();
}

@Bean
public IntegrationFlow parallelFlowTwo() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelTwo").handle("parallelServiceHandlerTwo", "handle").nullChannel();
}
@Bean
public MessageChannel executorChannelOne() {
    return new ExecutorChannel(
            Executors.newFixedThreadPool(10));
}

@Bean
public MessageChannel executorChannelTwo() {
    return new ExecutorChannel(
            Executors.newFixedThreadPool(10));
}
@Component
@Scope("prototype")
public class ServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelOne;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("parties");
        rowDatas.stream().forEach(data -> {
            // renamed to avoid shadowing the method parameter
            Message<?> partyMessage = MessageBuilder.withPayload(data).build();
            executorChannelOne.send(partyMessage);
        });
        return message;
    }
}
@Component
@Scope("prototype")
public class ParallelServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelTwo;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("party");
        rowDatas.stream().forEach(data -> {
            // renamed to avoid shadowing the method parameter
            Message<?> partyMessage = MessageBuilder.withPayload(data).build();
            executorChannelTwo.send(partyMessage);
        });
        return message;
    }
}
First of all, there is no reason to make your services @Scope("prototype"): I don't see any state being held in your services, so they are stateless and can simply be singletons. Second: since you end your flows with the nullChannel(), there is no point in returning anything from your service methods. Just make them void and the flow is going to end there naturally.
Another observation: you call executorChannelOne.send(message) directly in the code of your service method. The same would be achieved if you just returned that new message from your service method and had that executorChannelOne as the next .channel() in your flow definition after the handle("serviceHandlerOne", "handle").
Since it looks like you do that in a loop, you might consider adding a .split() in between: the handler returns your List<?> rowDatas and the splitter takes care of iterating over that data and sending each item to that executorChannelOne.
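For example, a minimal sketch of that idea in the Java DSL (the bean and channel names are the ones from the question; making the handler return the List instead of sending manually is the assumption here):

@Bean
public IntegrationFlow flowOne() {
    return IntegrationFlows.from("mychannel")
            .handle("serviceHandlerOne", "handle") // now returns List<?> instead of sending manually
            .split()                               // emits one message per list element
            .channel("executorChannelOne")
            .get();
}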
Now about your original question.
There is really no easy way to say that your executors are not busy any more. They might appear free at the moment of the request just because the message for a task has not reached an executor channel yet.
Typically we recommend using some async synchronizer for your data. The aggregator is a good way to correlate several in-flight messages. This way the aggregator collects a group and does not emit a reply until that group is completed.
The splitter I've mentioned above adds sequence detail headers by default, so a subsequent aggregator can track a message group easily.
Since you have layers in your flow, it looks like you would need several aggregators: two for your executor channels after the splitting, and one top-level aggregator for the file. Those two would reply to the top-level one for the final, per-file grouping.
You may also think about making those parties and party calls in parallel using a PublishSubscribeChannel, which can also be configured with applySequence=true. That information will then be used by the top-level aggregator for the per-file grouping.
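For instance, a rough sketch of how one of those executor-channel flows could end with an aggregator instead of nullChannel(), relying on the default sequence headers populated by the splitter (the trailing handle(...) is a placeholder assumption for whatever should happen with the completed group):

@Bean
public IntegrationFlow parallelFlowOne() {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            .aggregate() // correlates on the splitter's sequence headers
            .handle(group -> System.out.println("Group completed: " + group)) // placeholder
            .get();
}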
See more in docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#splitter
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator
So I'm diving deeper into the world of JMS.
I am writing some dummy projects right now to understand how to consume messages. I am using ActiveMQ Artemis as the message broker.
Whilst following a tutorial, I stumbled upon something in terms of consuming messages. What exactly is the difference between using a message listener to listen for messages and using the @JmsListener annotation?
This is what I have so far:
public class Receiver {

    @JmsListener(containerFactory = "jmsListenerContainerFactory", destination = "helloworld .q")
    public void receive(String message) {
        System.out.println("received message='" + message + "'.");
    }
}
@Configuration
@EnableJms
public class ReceiverConfig {

    @Value("${artemis.broker-url}")
    private String brokerUrl;

    @Bean
    public ActiveMQConnectionFactory activeMQConnectionFactory() {
        return new ActiveMQConnectionFactory(brokerUrl);
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(activeMQConnectionFactory());
        factory.setConcurrency("3-10");
        return factory;
    }

    @Bean
    public DefaultMessageListenerContainer orderMessageListenerContainer() {
        SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
        endpoint.setMessageListener(new StatusMessageListener("DMLC"));
        endpoint.setDestination("helloworld.q"); // Try renaming this and see what happens.
        return jmsListenerContainerFactory()
                .createListenerContainer(endpoint);
    }

    @Bean
    public Receiver receiver() {
        return new Receiver();
    }
}
public class StatusMessageListener implements MessageListener {

    public StatusMessageListener(String dmlc) {
    }

    @Override
    public void onMessage(Message message) {
        System.out.println("In the onMessage().");
        System.out.println(message);
    }
}
From what I've read, we register a message listener with the listener container, which in turn is created by the listener container factory. So essentially the flow is this:
DefaultJmsListenerContainerFactory -> creates -> DefaultMessageListenerContainer -> registers a message listener which is used to listen to messages from the configured endpoint.
From my research, I've gathered that MessageListeners are used to asynchronously consume messages from queues/topics, whilst the @JmsListener annotation is used to synchronously listen to messages?
Furthermore, there are a few other listener container factories out there, such as DefaultJmsListenerContainerFactory and SimpleJmsListenerContainerFactory, but I'm not sure I get the difference. I was reading https://codenotfound.com/spring-jms-listener-example.html, and what I've gathered from that is that the Default factory uses a pull model, which suggests it's async, so why would it matter whether we consume the message via a MessageListener or the annotation? I'm a bit confused and muddled up, so I would like my doubts to be cleared up. Thanks!
This is a snippet of the program's output when sending 100 dummy messages (I just noticed it's not outputting the even-numbered messages..):
received message='This the 95 message.'.
In the onMessage().
ActiveMQMessage[ID:006623ca-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24068, durable=true, address=helloworld.q,userID=006623ca-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=00651257-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 97 message.'.
In the onMessage().
ActiveMQMessage[ID:006ba214-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24088, durable=true, address=helloworld.q,userID=006ba214-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=0069cd51-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 99 message.'.
The following configuration
@Configuration
@EnableJms
public class ReceiverConfig {
    // your config code here..
}
would ensure that every time a Message is received on the Destination named "helloworld .q", Receiver.receive() is called with the content of the message.
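For example, a minimal sketch of a sender to exercise that configuration (assuming a JmsTemplate is available against the same broker; the message text mirrors the output shown above):

@Component
public class Sender {

    @Autowired
    private JmsTemplate jmsTemplate; // assumed to be configured with the same connection factory

    public void send(int i) {
        // Produces a TextMessage on the destination the @JmsListener consumes from
        jmsTemplate.convertAndSend("helloworld .q", "This the " + i + " message.");
    }
}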
You can read more here: https://docs.spring.io/spring/docs
I have a Spring Boot application which has problems retrieving JMS messages of type TextMessage from an ActiveMQ broker.
When the consumer tries to retrieve messages from the broker, it cannot automatically convert a message to a TextMessage but treats it as a BytesMessage. There is a JmsListener which should read the messages from the queue as TextMessage:
...
@JmsListener(destination = "foo")
public void jmsConsumer(TextMessage message) {
...
The JmsListener produces warnings like the following, and drops the messages:
org.springframework.jms.listener.adapter.ListenerExecutionFailedException: Listener method could not be invoked with incoming message
Endpoint handler details:
Method [public void net.aschemann.demo.springboot.jmsconsumer.JmsConsumer.jmsConsumer(javax.jms.TextMessage)]
Bean [net.aschemann.demo.springboot.jmsconsumer.JmsConsumer#4715f07]; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [javax.jms.TextMessage] for org.springframework.jms.listener.adapter.AbstractAdaptableMessageListener$MessagingMessageConverterAdapter$LazyResolutionMessage#7c49d298, failedMessage=org.springframework.jms.listener.adapter.AbstractAdaptableMessageListener$MessagingMessageConverterAdapter$LazyResolutionMessage#7c49d298
at org.springframework.jms.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:118) ~[spring-jms-5.1.4.RELEASE.jar:5.1.4.RELEASE]
I have extracted a small sample application to debug the problem: https://github.com/ascheman/springboot-camel-jms
The producer in real life is a commercial application which makes use of Apache Camel. Hence, I can hardly change/customize the producer. I have tried to build a sample producer which shows the same behavior.
Could I somehow tweak the consumer to treat the message as TextMessage?
Besides: is there any way to retrieve the additional AMQP properties from the message programmatically, directly in Spring? Of course, I could still read the message as a BytesMessage and try to parse the properties out. But I am looking for a cleaner way which is backed by some Spring API. The Spring @Headers annotation didn't help so far.
I faced the same issue as the question owner. After I followed the comment from @AndyWilkinson and added the transport.transformer option on the transportConnector in activemq.xml as follows, the issue was solved.
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;transport.transformer=jms"/>
I had the same error, and it was caused because LazyResolutionMessage is called from MessagingMessageConverter, which is the default implementation of MessageConverter and is what converts your message (actually it doesn't convert anything, since it's the default):
return ((org.springframework.messaging.Message) payload).getPayload();
I have accomplished what you want; in the end my consumer was working like this:
@JmsListener(destination = "${someName}")
public void consumeSomeMessages(MyCustomEvent e) {
    ....
}
What I had to do was:
#Bean(name = "jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory whateverNameYouWant(final ConnectionFactory genericCF) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setErrorHandler(t -> log.error("bad consumer, bad", t));
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
factory.setConnectionFactory(genericCF);
factory.setMessageConverter(
new MessageConverter() {
#Override
public Message toMessage(Object object, Session session) {
throw new UnsupportedOperationException("since it's only for consuming!");
}
#Override
public MyCustomEvent fromMessage(Message m) {
try {
// whatever transformation you want here...
// here you could print the message, try casting,
// building new objects with message's attributes, so on...
// example:
return (new ObjectMapper()).readValue(((TextMessage) m).getText(), MyCustomEvent.class);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
);
return factory;
}
A few key points:
If your DefaultJmsListenerContainerFactory method is also called jmsListenerContainerFactory, you don't need the name attribute on the @Bean annotation.
Notice you can also implement an ErrorHandler to deal with exceptions when trying to convert/cast your message's type!
The ConnectionFactory was a Spring-managed bean, in my case Amazon's SQSConnectionFactory since I wanted to consume from an SQS queue. Please provide your equivalent accordingly. Mine was:
#Bean("connectionFactory")
public SQSConnectionFactory someOtherNome() {
return new SQSConnectionFactory(
new ProviderConfiguration(),
AmazonSQSClientBuilder.standard()
.withRegion(Regions.US_EAST_1)
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials(
"keyAccess",
"keySecret"
)
)
)
.build()
);
}
If you have a problem with the conversion from byte[] to String, use:
.convertBodyTo(String.class)
Route example:
from(QUEUE_URL)
    .routeId("consumer")
    .convertBodyTo(String.class)
    .log("${body}")
    .to("mock:mockRoute");
I am trying to use an ActiveMQ broker to deliver a message to two consumers listening on an automatic topic, employing Spring Integration facilities.
Here are my configuration beans (in common between publishers and subscribers):
#Value("${spring.activemq.broker-url}")
String brokerUrl;
#Value("${spring.activemq.user}")
String userName;
#Value("${spring.activemq.password}")
String password;
#Bean
public ConnectionFactory connectionFactory() {
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(brokerUrl);
connectionFactory.setUserName(userName);
connectionFactory.setPassword(password);
return connectionFactory;
}
@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setPubSubDomain(true); //!!
    configurer.configure(factory, connectionFactory);
    return factory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(connectionFactory());
    template.setPubSubDomain(true); //!!
    return template;
}
Here are beans for consumers:
#Bean(name = "jmsInputChannel")
public MessageChannel jmsInputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
.channel("jmsInputChannel").get();
}
//Consumes the message.
#ServiceActivator(inputChannel="jmsInputChannel")
public void receive(String msg){
System.out.println("Received Message: " + msg);
}
And these are the beans for the producer:
#Bean(name = "jmsOutputChannel")
public MessageChannel jmsOutputChannel() {
return new PublishSubscribeChannel();
}
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(connectionFactory())
.destination("myTopic")
).get();
}
private static int counter = 1;
#Scheduled(initialDelay=5000, fixedDelay=2000)
public void send() {
String s = "Message number " + counter;
counter++;
jmsOutputChannel().send(MessageBuilder.withPayload(s).build());
}
I am NOT using an embedded ActiveMQ broker. I am using one broker, one producer and two consumers each in their own (docker) container.
My problem is that, while I have invoked setPubSubDomain(true) on both the JmsListenerContainerFactory and the JmsTemplate, my "topics" behave as queues: one consumer prints all the even-numbered messages, while the other prints all the odd-numbered ones.
In fact, by accessing the ActiveMQ web interface, I see that my "topics" (i.e. under the /topics.jsp page) are named ActiveMQ.Advisory.Consumer.Queue.myTopic and ActiveMQ.Advisory.Producer.Queue.myTopic, and "myTopic" does appear in the queues page (i.e. /queues.jsp).
The nodes get started in the following order:
AMQ broker
Consumer 1
Consumer 2
Producer
The first "topic" that gets created is ActiveMQ.Advisory.Consumer.Queue.myTopic, while the producer one appears only after the producer has started, obviously.
I am not an expert on ActiveMQ, so maybe the fact that my producer/consumer "topics" are named ".Queue" is just misleading. However, I do get the semantics described in the official ActiveMQ documentation for queues, rather than topics.
I have also looked at this question already; however, all of the channels I employ are already of the PublishSubscribeChannel kind.
What I need to achieve is having all messages delivered to all of my (possibly > 2) consumers.
UPDATE: I forgot to mention, my application.properties file already does contain spring.jms.pub-sub-domain=true, along with other settings.
Also, the version of Spring Integration that I am using is 4.3.12.RELEASE.
The problem is, I still get round-robin load-balanced semantics rather than publish-subscribe semantics.
From what I can see in the link provided by @Hassen Bennour, I would expect to get an ActiveMQ.Advisory.Producer.Topic.myTopic and an ActiveMQ.Advisory.Consumer.Topic.myTopic row in the list of all topics. Somehow I think I am not using the Spring Integration libraries well, and thus I am setting up a Queue when I want to set up a Topic.
UPDATE 2: Sorry about the confusion. jmsOutputChannel2 is in fact jmsOutputChannel here; I have edited the main part. I am using a secondary "topic" in my code as a check, something for the producer to send messages to and receive replies on itself. The "topic" name differs as well, so... it's on a separate flow entirely.
I did achieve a little progress by changing the receiver flow in this way:
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
//return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory()).destination("myTopic"))
//.channel("jmsInputChannel").get();
return IntegrationFlows.from(Jms.publishSubscribeChannel(connectionFactory()).destination("myTopic")) //Jms.publishSubscribeChannel() rather than Jms.messageDrivenChannelAdapter()
.channel("jmsInputChannel").get();
}
This produces an advisory topic of type Consumer.Topic.myTopic rather than Consumer.Queue.myTopic on the broker, AND indeed a topic named just myTopic (as I can see from the topics tab). However, once the producer starts, a Producer.Queue advisory topic gets created, and messages get sent there while not being delivered.
The choice of adapter in the input flow seems to determine what kind of advisory consumer topic gets created (Topic vs Queue when switching to Jms.publishSubscribeChannel() from Jms.messageDrivenChannelAdapter()). However, I haven't been able to find something akin to that for the output flow.
UPDATE 3: Problem solved, thanks to @Hassen Bennour. Recap:
I wired the jmsTemplate() into the producer's Jms.outboundAdapter():
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jsaTemplate())
.destination("myTopic")
).get();
}
And a more complex configuration for the consumer Jms.messageDrivenChannelAdapter():
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Though probably the smoothest and most flexible method is having such a bean...
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}
to wire as a destination for the adapters, rather than just a String.
Thanks again.
Add spring.jms.pub-sub-domain=true to application.properties
or
@Bean
public JmsListenerContainerFactory<?> jsaFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // the configurer will use pub-sub-domain from application.properties if defined, or false if not,
    // so setting it at the factory level needs to happen after this call
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    return factory;
}
ActiveMQ.Advisory.Consumer.Queue.myTopic is an advisory topic for a Queue named myTopic.
Take a look here to read about advisory messages:
http://activemq.apache.org/advisory-message.html
UPDATE:
Update your definitions as below:
#Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
return IntegrationFlows.from(jmsOutputChannel()).handle(Jms.outboundAdapter(jmsTemplate())
.destination("myTopic")
).get();
}
#Bean(name = "jmsInputFlow")
public IntegrationFlow buildReceiverFlow() {
return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(
Jms.container(connectionFactory(),"myTopic")
.pubSubDomain(true).get()) )
.channel("jmsInputChannel").get();
}
Or define the Destination as a topic and replace destination("myTopic") with destination(topic()):
@Bean
public Topic topic() {
    return new ActiveMQTopic("myTopic");
}
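For example, a minimal sketch of the producer flow wired against that bean (reusing the jmsTemplate() and jmsOutputChannel() beans from the question):

@Bean(name = "jmsOutputFlow")
public IntegrationFlow jmsOutputFlow() {
    return IntegrationFlows.from(jmsOutputChannel())
            .handle(Jms.outboundAdapter(jmsTemplate())
                    .destination(topic())) // a javax.jms.Topic, so pub/sub is unambiguous
            .get();
}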
I wrote the following configuration:
@Slf4j
@Configuration
@EnableConfigurationProperties(BatchProperties.class)
public class BatchConfiguration {

    @Autowired
    private BatchProperties properties;

    @Bean
    public PollableAmqpChannel testingChannel(final RabbitTemplate rabbitTemplate) {
        final PollableAmqpChannel channel = new PollableAmqpChannel(properties.getQueue(), rabbitTemplate);
        channel.setLoggingEnabled(false);
        return channel;
    }

    @Bean
    @ServiceActivator(inputChannel = "testingChannel", poller = @Poller(fixedRate = "1000", maxMessagesPerPoll = "1"))
    public MessageHandler messageHandler(final RabbitTemplate rabbitTemplate) {
        return message -> {
            log.info("Received: {}", message);
            rabbitTemplate.convertAndSend(properties.getQueue(), message);
        };
    }
}
The message gets successfully read and requeued, but I keep getting the following log message:
Calling receive with a timeout value on PollableAmqpChannel. The
timeout will be ignored since no receive timeout is supported.
I am using Spring Boot 1.5.3.RELEASE.
I've put a breakpoint on:
@Override
public void setLoggingEnabled(boolean loggingEnabled) {
    this.loggingEnabled = loggingEnabled;
}
in the AbstractAmqpChannel class. It gets called with 'false' (as it should be, according to my configuration), and then it gets called again each time a message is polled, setting it back to 'true'.
I checked the hashcode and it seems it is my bean, but 'loggingEnabled' gets reset to 'true' with each message.
Is there something wrong with my configuration and how can I fix this?
It looks like a bug: if you have spring-integration-jmx on the classpath, or specify @EnableIntegrationManagement on one of your config classes, the IntegrationManagementConfigurer bean sets loggingEnabled to true for all IntegrationManagement implementations, overwriting your setting.
Please open a JIRA Issue.
In the meantime, you could add a bean that implements SmartLifecycle to set the flag back to false (in its start() method); it will run after the IntegrationManagementConfigurer.
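A minimal sketch of that workaround, assuming the testingChannel bean from the question (the class name and phase value are illustrative, not a definitive fix):

import org.springframework.context.SmartLifecycle;
import org.springframework.integration.amqp.channel.PollableAmqpChannel;
import org.springframework.stereotype.Component;

@Component
public class PollableAmqpChannelLoggingFix implements SmartLifecycle {

    private final PollableAmqpChannel testingChannel;

    private volatile boolean running;

    public PollableAmqpChannelLoggingFix(PollableAmqpChannel testingChannel) {
        this.testingChannel = testingChannel;
    }

    @Override
    public void start() {
        // Runs after IntegrationManagementConfigurer has overwritten the flag
        this.testingChannel.setLoggingEnabled(false);
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE; // start as late as possible
    }
}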
Of course, you could also set the log category org.springframework.integration.amqp.channel.PollableAmqpChannel to WARN to achieve the same effect much more simply.
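With Spring Boot's default logging setup, that could be a single line in application.properties:

logging.level.org.springframework.integration.amqp.channel.PollableAmqpChannel=WARN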