Spring for Apache Kafka 2.8.4 (https://docs.spring.io/spring-kafka/reference/html) shows some of the listener methods with the @Payload annotation next to the message parameter and some without. For example:
#KafkaListener(id = "cat", topics = "myTopic",
containerFactory = "kafkaManualAckListenerContainerFactory")
public void listen(String data, Acknowledgment ack) {
...
ack.acknowledge();
}
and
#KafkaListener(id = "qux", topicPattern = "myTopic1")
public void listen(#Payload String foo,
#Header(name = KafkaHeaders.RECEIVED_MESSAGE_KEY, required = false) Integer key,
#Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
#Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
#Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts
) {
...
}
Which approach is correct? I am testing both and see no difference.
The short answer is No, you don't have to use it.
The long answer is "it depends": if you want to do some validation on the Kafka message, @Payload will help you with that, as in the following from the Spring docs:
To configure the @KafkaListener to handle null payloads, you must use the @Payload annotation with required = false. If it is a tombstone message for a compacted log, you usually also need the key so that your application can determine which key was "deleted". The following example shows such a configuration:
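A sketch of such a configuration, assuming String-deserialized values (the listener id and topic name are illustrative):

@KafkaListener(id = "deletableListener", topics = "myTopic")
public void listen(@Payload(required = false) String value,
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
    // a null value is a tombstone: the record with this key was "deleted"
    if (value == null) {
        // remove the entry for this key on the application side
        return;
    }
    // normal processing
}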
I would prefer to use a plain String instead; that way you avoid some known parsing issues (such as the poison-pill problem) by offloading deserialization to the application side instead of relying on the built-in handling in Spring.
I have a Spring Integration flow that gets triggered once every day; it pulls all parties from the database and sends each party to an executorChannel.
The next flow pulls data for each party and then processes them in parallel by sending them to a different executor channel.
The challenge I'm facing is how to know when this entire process ends. Any ideas on how to achieve this?
Here's my pseudo code of executor channels and integration flows.
@Bean
public IntegrationFlow fileListener() {
    return IntegrationFlows.from(Files.inboundAdapter(new
            File("pathtofile"))).channel("mychannel").get();
}

@Bean
public IntegrationFlow flowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("mychannel").handle("serviceHandlerOne",
            "handle").nullChannel();
}

@Bean
public IntegrationFlow parallelFlowOne() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelOne").handle("parallelServiceHandlerOne",
            "handle").nullChannel();
}

@Bean
public IntegrationFlow parallelFlowTwo() throws ParserConfigurationException {
    return IntegrationFlows.from("executorChannelTwo").handle("parallelServiceHandlerTwo",
            "handle").nullChannel();
}

@Bean
public MessageChannel executorChannelOne() {
    return new ExecutorChannel(
            Executors.newFixedThreadPool(10));
}

@Bean
public MessageChannel executorChannelTwo() {
    return new ExecutorChannel(
            Executors.newFixedThreadPool(10));
}
@Component
@Scope("prototype")
public class ServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelOne;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("parties");
        rowDatas.stream().forEach(data -> {
            Message<?> partyMessage = MessageBuilder.withPayload(data).build();
            executorChannelOne.send(partyMessage);
        });
        return message;
    }
}
@Component
@Scope("prototype")
public class ParallelServiceHandlerOne {

    @Autowired
    MessageChannel executorChannelTwo;

    @ServiceActivator
    public Message<?> handle(Message<?> message) {
        List<?> rowDatas = repository.findAll("party");
        rowDatas.stream().forEach(data -> {
            Message<?> partyMessage = MessageBuilder.withPayload(data).build();
            executorChannelTwo.send(partyMessage);
        });
        return message;
    }
}
First of all, there is no reason to make your services @Scope("prototype"): I don't see any state held in them, so they are stateless and can simply be singletons. Second: since your flows end with nullChannel(), there is no point in returning anything from your service methods. Just make them void and the flow will end there naturally.
Another observation: you call executorChannelOne.send(message) directly in the code of your service method. The same would be achieved if you just returned that new message from the service method and declared executorChannelOne as the next .channel() in your flow definition after handle("parallelServiceHandlerOne", "handle").
Since it looks like you do that in a loop, you might consider adding a .split() in between: the handler returns your List<?> rowDatas, and the splitter takes care of iterating over that data and sending each item to that executorChannelOne, as in the sketch below.
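For example, a sketch of what flowOne could look like, assuming serviceHandlerOne.handle is changed to return the List<?> of parties instead of sending manually:

@Bean
public IntegrationFlow flowOne() {
    return IntegrationFlows.from("mychannel")
            // the handler now returns the List<?> of parties
            .handle("serviceHandlerOne", "handle")
            // the splitter emits one message per list item and adds
            // sequence headers that a downstream aggregator can use
            .split()
            .channel("executorChannelOne")
            .get();
}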
Now about your original question.
There is really no easy way to tell that your executors are not busy any more. They might look idle at the moment you ask simply because a task's message has not reached the executor channel yet.
Typically we recommend using some async synchronizer for your data. The aggregator is a good way to correlate several in-flight messages: it collects a group and does not emit a reply until that group is complete.
The splitter I mentioned above adds sequence detail headers by default, so a subsequent aggregator can track a message group easily.
Since your flow has layers, it looks like you would need several aggregators: two for your executor channels after splitting, and one top-level aggregator for the file. The first two would reply to the top-level one for the final, per-file grouping.
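A sketch of one of the parallel flows ending in an aggregator (the doneChannel name is illustrative):

@Bean
public IntegrationFlow parallelFlowOne() {
    return IntegrationFlows.from("executorChannelOne")
            .handle("parallelServiceHandlerOne", "handle")
            // correlates replies via the sequence headers added by the
            // splitter and releases one message once the group is complete
            .aggregate()
            .channel("doneChannel")
            .get();
}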
You may also think about making those parties and party calls in parallel using a PublishSubscribeChannel, which can be configured with applySequence = true. That info is then used by the top-level aggregator for the per-file grouping.
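A sketch of such a fan-out, assuming both branches subscribe to the same publish-subscribe channel:

@Bean
public IntegrationFlow fanOut() {
    return IntegrationFlows.from("mychannel")
            .publishSubscribeChannel(spec -> spec
                    // adds correlation/sequence headers so the top-level
                    // aggregator can group both branches per file
                    .applySequence(true)
                    .subscribe(flow -> flow.channel("executorChannelOne"))
                    .subscribe(flow -> flow.channel("executorChannelTwo")))
            .get();
}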
See more in docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#splitter
https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator
I've configured my consumer to accept messages from a topic in batches. How do I forward them to a new topic?
I want each consumed message to be forwarded as its own message, so X consumed messages will produce X outgoing messages.
Here's my current setup:
@KafkaListener(topics = "input")
@SendTo("output")
public ConsumerRecords consume(ConsumerRecords records) {
    // Do things
    return records;
}
And here's the exception thrown:
org.springframework.kafka.KafkaException: No method found for class java.util.ArrayList
That functionality is not supported. In any case, you can't send a ConsumerRecord to a Producer.
This works, though
#KafkaListener(id = "foo", topics = "input")
#SendTo("output")
public List<String> consume(List<String> data) {
return data;
}
(where String is the type created by your deserializer).
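For this to work, the listener container must be in batch mode; a sketch of such a factory, assuming String keys and values:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // deliver records as a List<String> to the @KafkaListener method
    factory.setBatchListener(true);
    return factory;
}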
I have a Spring Boot application which has problems retrieving JMS messages of type TextMessage from an ActiveMQ broker.
When the consumer retrieves messages from the broker, it does not automatically convert them to TextMessage but treats them as BytesMessage. There is a @JmsListener which should read the messages from the queue as TextMessage:
...
@JmsListener(destination = "foo")
public void jmsConsumer(TextMessage message) {
...
The JmsListener produces warnings like the following, and drops the messages:
org.springframework.jms.listener.adapter.ListenerExecutionFailedException: Listener method could not be invoked with incoming message
Endpoint handler details:
Method [public void net.aschemann.demo.springboot.jmsconsumer.JmsConsumer.jmsConsumer(javax.jms.TextMessage)]
Bean [net.aschemann.demo.springboot.jmsconsumer.JmsConsumer@4715f07]; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [javax.jms.TextMessage] for org.springframework.jms.listener.adapter.AbstractAdaptableMessageListener$MessagingMessageConverterAdapter$LazyResolutionMessage@7c49d298, failedMessage=org.springframework.jms.listener.adapter.AbstractAdaptableMessageListener$MessagingMessageConverterAdapter$LazyResolutionMessage@7c49d298
at org.springframework.jms.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:118) ~[spring-jms-5.1.4.RELEASE.jar:5.1.4.RELEASE]
I have extracted a small sample application
to debug the problem: https://github.com/ascheman/springboot-camel-jms
The producer in real life is a commercial application which makes use of Apache Camel. Hence, I can hardly change/customize the producer. I have tried to build a sample producer which shows the same behavior.
Could I somehow tweak the consumer to treat the message as TextMessage?
Besides: is there any way to retrieve the additional AMQP properties from the message programmatically, directly in Spring? Of course, I could still read the message as BytesMessage and try to parse the properties out, but I am looking for a cleaner way backed by a Spring API. The Spring @Headers annotation didn't help so far.
I faced the same issue as the question owner. After I followed the comment from @AndyWilkinson and added the transport.transformer option to the transportConnector in activemq.xml as follows, the issue was solved:
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;transport.transformer=jms"/>
I had the same error. It was caused because LazyResolutionMessage is called from MessagingMessageConverter, the default implementation of MessageConverter, which "converts" your message (actually it doesn't, since it's the default):
return ((org.springframework.messaging.Message) payload).getPayload();
I accomplished what you want; in the end my consumer was working like this:

@JmsListener(destination = "${someName}")
public void consumeSomeMessages(MyCustomEvent e) {
    ....
}
What I had to do was:
@Bean(name = "jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory whateverNameYouWant(final ConnectionFactory genericCF) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setErrorHandler(t -> log.error("bad consumer, bad", t));
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setConnectionFactory(genericCF);
    factory.setMessageConverter(
        new MessageConverter() {
            @Override
            public Message toMessage(Object object, Session session) {
                throw new UnsupportedOperationException("since it's only for consuming!");
            }

            @Override
            public MyCustomEvent fromMessage(Message m) {
                try {
                    // whatever transformation you want here...
                    // here you could print the message, try casting,
                    // build new objects from the message's attributes, and so on...
                    // example:
                    return (new ObjectMapper()).readValue(((TextMessage) m).getText(), MyCustomEvent.class);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }
    );
    return factory;
}
A few key points:
If your DefaultJmsListenerContainerFactory method is also called jmsListenerContainerFactory, you don't need the name attribute on the @Bean annotation.
Notice you can also implement an ErrorHandler to deal with exceptions when trying to convert/cast your message's type!
ConnectionFactory was a Spring-managed bean using Amazon's SQSConnectionFactory, since I wanted to consume from an SQS queue. Substitute your own equivalent. Mine was:
#Bean("connectionFactory")
public SQSConnectionFactory someOtherNome() {
return new SQSConnectionFactory(
new ProviderConfiguration(),
AmazonSQSClientBuilder.standard()
.withRegion(Regions.US_EAST_1)
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials(
"keyAccess",
"keySecret"
)
)
)
.build()
);
}
If you have a problem with the conversion from byte[] to String, use:
.convertBodyTo(String.class)
Route example:
from(QUEUE_URL)
.routeId("consumer")
.convertBodyTo(String.class)
.log("${body}")
.to("mock:mockRoute");
I'm sending messages to IBM MQ with a correlationId (unique for each message). Then I want to read this concrete message, with its specific correlationId, from the output queue, and I want the read to be non-blocking so I can use it in a Java WebFlux controller.
I'm wondering if there is a way to do this without a lot of pain. Options like jmsTemplate.receiveSelected(...) are blocking, while creating a bean implementing the MessageListener interface doesn't provide a way to select messages with a dynamic selector (i.e. the correlationId is unique for each message).
You could use a Spring MessageListener to retrieve all messages and connect it to the controller via Mono.create(...) and your own event listener, which triggers the result Mono.
// Consumes a message and triggers the result Mono
public interface MyEventListener extends Consumer<MyOutputMessage> {}
A class to route incoming messages to the correct MyEventListener:
public class MyMessageProcessor {
// You could use in-memory cache here if you need ttl etc.
private static final ConcurrentHashMap<String, MyEventListener> REGISTRY
= new ConcurrentHashMap<>();
public void register(String correlationId, MyEventListener listener) {
MyEventListener oldListener = REGISTRY.putIfAbsent(correlationId, listener);
if (oldListener != null)
throw new IllegalStateException("Correlation ID collision!");
}
public void unregister(String correlationId) {
REGISTRY.remove(correlationId);
}
public void accept(String correlationId, MyOutputMessage myOutputMessage) {
Optional.ofNullable(REGISTRY.get(correlationId))
.ifPresent(listener -> listener.accept(myOutputMessage));
}
}
WebFlux controller:
private final MyMessageProcessor messageProcessor;
....
@PostMapping("/process")
Mono<MyOutputMessage> process(Mono<MyInputMessage> inputMessage) {
String correlationId = ...; //generate correlationId
// then send message asynchronously
return Mono.<MyOutputMessage>create(sink ->
// create and save MyEventListener which call MonoSink.success
messageProcessor.register(correlationId, sink::success))
// define timeout if you don't want to wait forever
.timeout(...)
// cleanup MyEventListener after success, error or cancel
.doFinally(ignored -> messageProcessor.unregister(correlationId));
}
And in the onMessage of your JMS MessageListener implementation you could call:
messageProcessor.accept(correlationId, myOutputMessage);
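For completeness, a sketch of such a listener (the toMyOutputMessage conversion helper is hypothetical):

public class ReplyListener implements MessageListener {

    private final MyMessageProcessor messageProcessor;

    public ReplyListener(MyMessageProcessor messageProcessor) {
        this.messageProcessor = messageProcessor;
    }

    @Override
    public void onMessage(Message message) {
        try {
            // hand the reply to the Mono registered under this correlation id
            messageProcessor.accept(message.getJMSCorrelationID(),
                    toMyOutputMessage(message)); // hypothetical conversion helper
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}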
You can find a similar example for Flux in the Reactor 3 reference guide.
I currently have a FooListener that listens to a queue containing Foo messages. How do I add another BarListener class to listen to the same queue for Bar messages?
My RabbitMQ is currently configured like this:
@Configuration
public class RabbitMQConfig {

    @Bean
    public MessageListenerContainer messageListenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueues(workQueue());
        container.setMessageListener(new MessageListenerAdapter(fooListener(), new JsonMessageConverter()));
        container.setDefaultRequeueRejected(false);
        return container;
    }
}
There is currently no built-in support for routing to different listeners according to the payload type.
You can write a simple listener wrapper...
public void handleMessage(Object payload) {
    if (payload instanceof Foo) {
        this.fooListener.handleMessage((Foo) payload);
    }
    else if (payload instanceof Bar) {
        this.barListener.handleMessage((Bar) payload);
    }
    else {
        // unexpected payload type
    }
}
EDIT:
Spring AMQP 1.5 (currently at milestone 1, 1.5.0.M1) now supports this feature; see the "what's new" section of the docs and the blog announcement.
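With that feature, a class-level @RabbitListener can delegate to type-specific @RabbitHandler methods; a sketch (the queue name is illustrative):

@Component
@RabbitListener(queues = "workQueue")
public class MultiTypeListener {

    @RabbitHandler
    public void handle(Foo foo) {
        // handle Foo messages
    }

    @RabbitHandler
    public void handle(Bar bar) {
        // handle Bar messages
    }
}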
Using the same queue for different message types is, I think, not the best option.
You could have one exchange with two routing keys and two queues with different bindings. The other way is to use a wrapper, as Gary Russell says, but the casting doesn't perform well and, besides, that solution doesn't really follow the single responsibility principle.
Regards.