Error handling in Spring Cloud Stream Kafka in Batch mode - java

I am using Spring Cloud Stream and Kafka Binder to consume messages in batches from a Kafka Topic. I am trying to implement an error handling mechanism. As per my understanding I can't use Spring Cloud Stream's enableDLQ property in batch mode.
I have found RecoveringBatchErrorHandler and DeadLetterPublishingRecoverer in the spring-kafka documentation for retrying and sending failed messages. But I am not able to understand how to send the records to a custom DLQ topic while following the functional programming standards. All the examples I can see are using KafkaTemplates.
Are there any good examples where I can find the implementation?
This is the spring doc I have been referring to.
https://docs.spring.io/spring-kafka/docs/2.5.12.RELEASE/reference/html/#recovering-batch-eh

That version is no longer supported as OSS; see https://spring.io/projects/spring-kafka#support
With the current version, use the DefaultErrorHandler configured with a DeadLetterPublishingRecoverer, and throw a BatchListenerFailedException to tell the framework which record in the batch failed.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#annotation-error-handling and https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters and https://docs.spring.io/spring-kafka/docs/current/reference/html/#legacy-eh
Add a ListenerContainerCustomizer bean to add your configured error handler to the listener container.
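A minimal sketch of how those pieces could fit together with the functional style (the DLQ topic name, the process() helper, and the availability of a KafkaTemplate bean are assumptions, not from the question):

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> containerCustomizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, destination, group) -> {
        // Publish failed records to a custom DLQ topic (topic name is hypothetical)
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
                (record, ex) -> new TopicPartition("my-custom-dlq", record.partition()));
        // Retry twice with a 1-second back off, then let the recoverer publish to the DLQ
        container.setCommonErrorHandler(new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2)));
    };
}

@Bean
public Consumer<List<Message<String>>> consume() {
    return messages -> {
        for (int i = 0; i < messages.size(); i++) {
            try {
                process(messages.get(i)); // hypothetical business logic
            }
            catch (Exception ex) {
                // Tells the framework which record in the batch failed
                throw new BatchListenerFailedException("Failed to process record", ex, i);
            }
        }
    };
}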

Related

How to read Kafka Topic from beginning with Spring Boot

https://github.com/ekimcoskun/KafkaMessageApp/tree/main/tata/src/main/java/com/example/tata/comp
I am trying to read a Kafka topic from the beginning and list it on the frontend (React)
If you configure the auto.offset.reset=earliest property on your consumer factory, then the consumer group will start from the earliest offset upon startup. If there is a committed offset, though, the app will start from the committed offset.
See all the properties here: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.kafka
If you want to render a UI for the data, you may want to look at using the InteractiveQueryService class
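For the offset part, with Spring Boot you can simply set spring.kafka.consumer.auto-offset-reset=earliest in application.properties; if you define your own consumer factory instead, a rough sketch (the property source and deserializers are assumptions) could be:

@Bean
public ConsumerFactory<String, String> consumerFactory(KafkaProperties kafkaProperties) {
    Map<String, Object> props = kafkaProperties.buildConsumerProperties();
    // Start from the earliest offset when the group has no committed offset yet
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new StringDeserializer());
}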

Axon Config - Kafka retry policies after @EventHandler throws an exception

I've started to use Axon 4.3.1 (latest version) in my project and am having a problem.
Where can I configure the Kafka retry policies after an @EventHandler throws an exception?
Note: I'm using the SubscribingEventProcessor type as event processor (in both projects). I'm using separate projects: the command model uses Mongo and publishes events on Kafka; the query model consumes events from Kafka (the event bus). In this way, they run in separate JVMs.
@ProcessingGroup("event-processor") is configured on the class with the event handler method. I'd like a config for Kafka to automatically retry after some time in error cases (from the query model project).
Can I use some default Axon component? Could I use something like spring-retry or Kafka's internal configs themselves?
I've found something like this in the documentation:
https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/event-processing/event-processors#error-handling
"Based on the provided ErrorContext object, you can decide to ignore the error, schedule retries, perform dead-letter-queue delivery or rethrow the exception."
How can I configure (for example) scheduled retries on @EventHandler errors?
Could you help me?
Thanks.
The current implementation of Axon's Kafka Extension (version 4.0-M2) does not support setting a retry policy when it comes to event handling.
I'd argue your best approach right now is to set up something like that on the Kafka side, if that's even possible. Otherwise, forcing a replay of the events through Kafka would be your best option.
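As for the error-handling hook the quoted documentation refers to, a rough sketch of registering one per processing group (assuming Axon 4.x and the Spring EventProcessingConfigurer; any retry or dead-letter logic inside the handler would be your own) could look like this:

@Autowired
public void configureErrorHandling(EventProcessingConfigurer configurer) {
    configurer.registerListenerInvocationErrorHandler("event-processor",
            configuration -> (exception, event, eventHandler) -> {
                // Decide here: log and ignore, hand off to your own retry/DLQ mechanism,
                // or rethrow so the event processor's ErrorHandler is invoked
                throw exception;
            });
}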

Spring Boot JMS - when should a message be sent on a @Transactional method?

I was wondering, on a Spring Boot method marked as @Transactional, when should a message appear on the queue? I'm asking because I've noticed 2 different behaviours in 2 different applications.
Both applications have the following in common:
Spring Boot 2.0 based
JMS message sending is using JmsTemplate, with setSessionTransacted set to true
No explicit transaction settings configured
There is a MongoDB in use (via Spring Data), and a record is modified in the same method as the message is sent
The major difference between the two applications is:
One has a JPA data source (using Spring Data), and a record is read and/or written in this method. The other application does not have this data source.
The difference in observed behaviour is that when the JPA source is present, the message is sent at the end of the method. Without it, the message is sent immediately.
Is this the expected behaviour?
Can I configure the applications to behave the same way? Ideally I'd like the message to be sent at the end (so any Mongo changes that fail would cancel the message send and roll back any JPA changes made). I realise that Mongo changes are not part of any transaction created.
Thanks
With JMS and a DB you have two resources.
To have a full transactional behavior you need distributed transactions support.
Without it, even when the message is sent as the last operation, the data is still changed in the database if the send fails.
To configure distributed transactions you need JTA. We use Bitronix in our application and it works very well.
Have a look at the docs: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-jta.html
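As a sketch of the setup being discussed (the entity, repository, and queue names are invented), note that without JTA the JMS commit and the database commit remain two separate commits even when the send happens last:

@Service
public class OrderService {

    private final JmsTemplate jmsTemplate;         // configured with setSessionTransacted(true)
    private final OrderRepository orderRepository; // hypothetical Spring Data JPA repository

    public OrderService(JmsTemplate jmsTemplate, OrderRepository orderRepository) {
        this.jmsTemplate = jmsTemplate;
        this.orderRepository = orderRepository;
    }

    @Transactional
    public void process(Long orderId) {
        Order order = orderRepository.findById(orderId).orElseThrow();
        order.setStatus("PROCESSED");
        // With an active transaction, the transacted JMS session is committed when the
        // surrounding transaction completes; only with JTA do both commits become one atomic unit
        jmsTemplate.convertAndSend("orders.queue", "processed:" + orderId);
    }
}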

Kafka serialize-deserialize issue

We are using Java configuration for the producer to achieve an asynchronous retry mechanism,
and on the consumer side we are using a message-driven adapter (XML configuration).
We identified that on the producer side we have
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, MessageSerializer.class.getName());
On the producer we don't have an option for setting an encoder implementation, as it only supports Serializer interface implementations:
MessageSerializer implements Serializer
And on the consumer side we don't have a deserializer option, as it only supports Decoder interface implementations:
<int-kafka:message-driven-channel-adapter
id="inAdapter"
channel="fromKafka"
connection-factory="connectionFactory"
key-decoder="kafkaKeyDecoder"
payload-decoder="kafkaDecoder"
topics="${topic.list}"
offset-manager="offsetManager"/>
kafkaDecoder implements Decoder
Therefore we are getting an error during serialization; can you please suggest how to handle this?
You are using an old version of spring-integration-kafka; it is not configured that way anymore; the current version is 2.1.0 and it sits on top of spring-kafka 1.1.2.
The integration components are documented in the spring-kafka reference; configuring Spring Kafka itself is covered elsewhere in that same reference.
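With the newer stack the adapter wraps a spring-kafka listener container, and (de)serialization is handled by plain Serializer/Deserializer implementations on the producer and consumer factories. A rough Java-config sketch (the topic, group, and the MyMessage/MyMessageDeserializer types are placeholders):

@Bean
public ConsumerFactory<String, MyMessage> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroup");
    // Deserializer counterpart of the MessageSerializer used on the producer side
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new MyMessageDeserializer());
}

@Bean
public KafkaMessageListenerContainer<String, MyMessage> container(ConsumerFactory<String, MyMessage> cf) {
    return new KafkaMessageListenerContainer<>(cf, new ContainerProperties("myTopic"));
}

@Bean
public KafkaMessageDrivenChannelAdapter<String, MyMessage> inAdapter(
        KafkaMessageListenerContainer<String, MyMessage> container) {
    KafkaMessageDrivenChannelAdapter<String, MyMessage> adapter =
            new KafkaMessageDrivenChannelAdapter<>(container);
    adapter.setOutputChannel(fromKafka());
    return adapter;
}

@Bean
public MessageChannel fromKafka() {
    return new DirectChannel();
}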

Spring integration: receive messages from multiple JMS destinations

I am using the Java DSL to configure my channel adapters. The thing I want to achieve can be described with the following piece of code:
IntegrationFlows
.from(Jms.messageDrivenChannelAdapter(mqCacheConnectionFactory)
.configureListenerContainer(container -> container.sessionTransacted(transacted))
.destinations(inputDestination1, inputDestination2) // missing method
.autoStartup(autoStartup)
.id(channelName)
.errorChannel(errorChannel)
)
.channel(commonChannel)
.get();
So I would like to have a messageDrivenChannelAdapter that would be capable of receiving from multiple JMS destinations. Is that achievable?
No, it isn't possible.
The Spring Integration JMS support is fully based on the Spring JMS foundation, and its AbstractMessageListenerContainer provides the ability to consume from only one destination. Therefore Jms.messageDrivenChannelAdapter() doesn't provide an option to configure several destinations to listen to.
The only option you have is to configure several Jms.messageDrivenChannelAdapter()s. What is good with Spring Integration is that you can direct them all to the same MessageChannel, so you won't have so much copy/paste hell.
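A rough sketch of that workaround (destination and channel names are made up), with each adapter feeding the same channel:

@Bean
public IntegrationFlow queue1Flow(ConnectionFactory connectionFactory) {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                    .destination("inputDestination1"))
            .channel("commonChannel")
            .get();
}

@Bean
public IntegrationFlow queue2Flow(ConnectionFactory connectionFactory) {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                    .destination("inputDestination2"))
            .channel("commonChannel")
            .get();
}

@Bean
public IntegrationFlow commonFlow() {
    return IntegrationFlows
            .from("commonChannel")
            .handle(message -> System.out.println("Received: " + message.getPayload()))
            .get();
}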
